I'm looking for a way to speed up the following approach. Any pointers are very welcome. Where are the bottlenecks?
Say I have the following data.frame:
df <- data.frame(names=c("A ADAM", "S BEAN", "A APPLE", "J BOND", "J BOND"),
v1=c("Test_a", "Test_b", "Test_a", "Test_b", "Test_b"),
v2=c("Test_c", "Test_c", "Test_d", "Test_d", "Test_d"))
I want to compare each pair of rows in df on their Jaro-Winkler distance (method "jw" from the stringdist package, so identical strings score 0).
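For a single pair this looks like the following (a quick illustration; the value matches row 1 of the output further down):

library(stringdist)
# Jaro-Winkler distance between the names in rows 1 and 2
stringdist("A ADAM", "S BEAN", method = "jw")
#> 0.4444444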
With some help from others (see this post), I've been able to construct the following code:
library(stringdist)

# columns to compare
testCols <- c("names", "v1", "v2")

# compare all pairs of rows
RowCompare <- function(x) {
  comp <- NULL
  pairs <- t(combn(nrow(x), 2))
  for (i in 1:nrow(pairs)) {
    row_a <- pairs[i, 1]
    row_b <- pairs[i, 2]
    a_tests <- x[row_a, testCols]
    b_tests <- x[row_b, testCols]
    comp <- rbind(comp, c(row_a, row_b, TestsCompare(a_tests, b_tests)))
  }
  colnames(comp) <- c("row_a", "row_b", "names_j", "v1_j", "v2_j")
  return(comp)
}
# define TestsCompare: distances for one pair of rows
TestsCompare <- function(x, y) {
  names_j <- stringdist(x$names, y$names, method = "jw")
  v1_j <- stringdist(x$v1, y$v1, method = "jw")
  v2_j <- stringdist(x$v2, y$v2, method = "jw")
  c(names_j, v1_j, v2_j)
}
This generates the correct output:
output <- as.data.frame(RowCompare(df))
> output
row_a row_b names_j v1_j v2_j
1 1 2 0.4444444 0.1111111 0.0000000
2 1 3 0.3571429 0.0000000 0.1111111
3 1 4 0.4444444 0.1111111 0.1111111
4 1 5 0.4444444 0.1111111 0.1111111
5 2 3 0.4603175 0.1111111 0.1111111
6 2 4 0.3333333 0.0000000 0.1111111
7 2 5 0.3333333 0.0000000 0.1111111
8 3 4 0.5634921 0.1111111 0.0000000
9 3 5 0.5634921 0.1111111 0.0000000
10 4 5 0.0000000 0.0000000 0.0000000
However, my real data.frame has 8 million observations, and I make 17 comparisons per pair instead of 3. Naively that is choose(8e6, 2) ≈ 3.2 × 10^13 row pairs, and running this code takes days...
I am looking for ways to speed up this process:
- Should I use matrices instead of data.frames?
- How can I parallelize this process? (See the second sketch below.)
- Can the comparisons be vectorized? (See the first sketch below.)
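To make the vectorization idea concrete, here is a minimal sketch of what I have in mind, using the small df from above (RowCompareVec is just a placeholder name): stringdist() accepts whole character vectors, so each column can be scored for all pairs in one call, avoiding both the row-by-row loop and the growing rbind(). It still enumerates every pair with combn(), so memory would remain a concern at 8 million rows.

library(stringdist)

# Hypothetical vectorized variant: build all pair indices once, then let
# stringdist() score an entire column of pairs in a single call.
RowCompareVec <- function(x, cols) {
  pairs <- t(combn(nrow(x), 2))
  out <- data.frame(row_a = pairs[, 1], row_b = pairs[, 2])
  for (col in cols) {
    a <- as.character(x[[col]])[pairs[, 1]]
    b <- as.character(x[[col]])[pairs[, 2]]
    out[[paste0(col, "_j")]] <- stringdist(a, b, method = "jw")
  }
  out
}

output <- RowCompareVec(df, testCols)

And here is a rough sketch of the parallel idea on top of that, shown for the names column only and assuming a Unix-like system (mclapply() forks, so on Windows one would use makeCluster() with parLapply() instead):

library(parallel)
library(stringdist)

pairs <- t(combn(nrow(df), 2))
n_cores <- 4  # assumption: set to the number of available cores

# Split the pair indices into one contiguous chunk per core and
# score the chunks in parallel.
chunks <- split(seq_len(nrow(pairs)),
                cut(seq_len(nrow(pairs)), n_cores, labels = FALSE))

res <- mclapply(chunks, function(idx) {
  stringdist(as.character(df$names)[pairs[idx, 1]],
             as.character(df$names)[pairs[idx, 2]],
             method = "jw")
}, mc.cores = n_cores)

names_j <- unlist(res, use.names = FALSE)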