Cleaning Data & Association Rules - R

Posted 2020-05-08 07:01

I am trying to tidy the following dataset (linked below) in R and then run association rules on it.

https://www.kaggle.com/fanatiks/shopping-cart

install.packages("dplyr")
library(dplyr)

df <- read.csv("Groceries (2).csv", header = F, stringsAsFactors = F, na.strings=c(""," ","NA"))
install.packages("stringr")
library(stringr)
temp1<- (str_extract(df$V1, "[a-z]+"))
temp2<- (str_extract(df$V1, "[^a-z]+"))
df<- cbind(temp1,df)
df[2] <- NULL
df[35] <- NULL
View(df)

summary(df)
str(df)

install.packages("arules")
library(arules)

trans <- as(df, "transactions")

I get the following warning when I run the trans <- as(df, "transactions") line above:

Warning message: Column(s) 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34 not logical or factor. Applying default discretization (see '? discretizeDF').

summary(trans)

When I run the above code, I get the following:

transactions as itemMatrix in sparse format with
 1499 rows (elements/itemsets/transactions) and
 1268 columns (items) and a density of 0.01529042 

most frequent items:
  V5= vegetables   V6= vegetables temp1=vegetables   V2= vegetables 
             140              113              109              108 
  V9= vegetables          (Other) 
             103            28490 

The attached results show all the vegetable values as separate items (V5= vegetables, V6= vegetables, and so on) instead of a single combined vegetables item, which is obviously inflating my number of columns. I am not sure why this is happening.

fit<-apriori(trans,parameter=list(support=0.006,confidence=0.25,minlen=2))
fit<-sort(fit,by="support")
inspect(head(fit))

1 Answer

神经病院院长
#2 · 2020-05-08 07:45

For coercion to the transactions class, the data frame needs to be made up of factor columns. You have a data frame of character columns, hence the warning message. The data also requires some further cleaning in order for the coercion to give you what you want.
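As a quick illustration of why you see items like V5= vegetables: even if you convert every column to a factor, as() treats each column as a separate variable and creates one item per column/level pair, so the warning goes away but the vegetables counts stay split across columns. A minimal sketch (df_factor and trans_wide are just illustrative names):

df_factor <- as.data.frame(lapply(df, factor))  # factor columns: no more warning
trans_wide <- as(df_factor, "transactions")     # but items are still "column=value"
itemLabels(trans_wide)[1:5]                     # e.g. "temp1=vegetables", "V2= vegetables"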

I'm not very familiar with the arules package, but I believe the read.transactions function may be more useful here, as it can discard duplicate items within a transaction (rm.duplicates = TRUE). I found it easiest to make a binary matrix and fill it with a for loop, but I am sure there is a neater solution.
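If the raw file is a plain comma-separated basket file (one transaction per row, items separated by commas), something along these lines might work, although the date fused into the first field would still show up as an "item" unless it is stripped beforehand, so treat this as a sketch rather than a drop-in replacement:

library(arules)

trans_rt <- read.transactions("Groceries (2).csv",
                              format = "basket",
                              sep = ",",
                              rm.duplicates = TRUE)  # drop duplicate items within each transaction
summary(trans_rt)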

Continuing on directly from your code:

items <- as.character(unique(unlist(df))) # get all unique items
items <- items[which(str_detect(items, "[a-z]"))] # remove numbers


# binary incidence matrix: one row per transaction, one column per item
trans <- matrix(0, nrow = nrow(df), ncol = length(items))

# flag which items appear in each row of the original data frame
for(i in 1:nrow(df)){
  trans[i, which(items %in% t(df[i,]))] <- 1
}

colnames(trans) <- items
rownames(trans) <- temp2

trans <- as(trans, "transactions")

summary(trans)

Giving

transactions as itemMatrix in sparse format with
 1637 rows (elements/itemsets/transactions) and
 38 columns (items) and a density of 0.3359965 

most frequent items:
 vegetables     poultry     waffles   ice cream  lunch meat     (Other) 
       1058         582         562         556         555       17588 

element (itemset/transaction) length distribution:
sizes
  0   1   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  26 
102  36   8  57  51  51  71  69  63  80  79  58  84  91  72 105  97  87 114  91  82  46  30   7   4   2 

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   0.00    8.00   14.00   12.77   18.00   26.00 

includes extended item information - examples:
    labels
1     pork
2  shampoo
3    juice

includes extended transaction information - examples:
  transactionID
1      1/1/2000
2      1/1/2000
3      2/1/2000
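As for the neater solution mentioned above, one option might be to skip the binary matrix entirely and coerce a list of per-row item vectors, since arules can also coerce a (named) list to transactions. This sketch reuses the items and temp2 objects from the code above; row_items and trans_alt are just illustrative names:

row_items <- lapply(seq_len(nrow(df)), function(i) {
  intersect(unlist(df[i, ], use.names = FALSE), items)  # keep only recognised item names
})
names(row_items) <- temp2                  # reuse the dates as transaction IDs
trans_alt <- as(row_items, "transactions")
summary(trans_alt)

Either way, once the transactions are built like this, the apriori() call at the end of your question should work as intended.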