Sanitize text for Mechanical Turk?

Posted 2019-09-19 03:42

Is there a pre-existing function to sanitize the character columns of a data.frame for Mechanical Turk? Here is an example of a line it gets hung up on:

x <- "Duke\U3e32393cs B or C, no concomittant malignancy, ulcerative colitis, Crohn\U3e32393cs disease, renal, heart or liver failure"

I believe these are Unicode characters, but MT won't let me proceed with them in there. I could obviously regex them out easily enough, but since I use MT a fair amount, I'm hoping for a more general solution that strips out all non-ASCII characters.

Edit

I can strip the encoded characters as follows:

> iconv(x,from="UTF-8",to="latin1",sub=".")
[1] "Duke......s B or C, no concomittant malignancy, ulcerative colitis, Crohn......s disease, renal, heart or liver failure"

But this still leaves me without a more general solution for a vector whose elements use an encoding other than UTF-8.

> dput(vec)
c("Colorectal cancer patients Duke\U3e32393cs B or C, no concomittant malignancy, ulcerative colitis, Crohn\U3e32393cs disease, renal, heart or liver failure", 
"Patients with Parkinson\U3e32393cs Disease not already on levodopa", 
"hi")

Note that the encoding of the plain-text element is "unknown", and it does not get converted to "latin1" by iconv, so the simple solution fails. I've taken a stab at a more nuanced solution below, but I'm not happy with it.

Answer 1:

Taking a stab at answering my own question, in the hope that someone has a better approach, because I don't trust this to handle all of the funky text out there:

sanitize.text <- function(x) {
  stopifnot(is.character(x))
  # iconv() cannot convert from an "unknown" encoding, so leave those
  # elements alone; convert everything else to latin1, dropping any
  # unconvertible characters (sub = "").
  sanitize.each.element <- function(elem) {
    ifelse(
      Encoding(elem)=="unknown",
      elem,
      iconv(elem,from=as.character(Encoding(elem)),to="latin1",sub="")
    )
  }
  x <- sapply(x, sanitize.each.element)
  names(x) <- NULL
  x
}

> sanitize.text(vec)
[1] "Colorectal cancer patients Dukes B or C, no concomittant malignancy, ulcerative colitis, Crohns disease, renal, heart or liver failure"
[2] "Patients with Parkinsons Disease not already on levodopa"                                                                              
[3] "hi"   

And a function to handle MT's other import quirks:

library(taRifx)
write.sanitized.csv <- function( x, file="", ... ) {
  sanitize.text <- function(x) {
    stopifnot(is.character(x))
    sanitize.each.element <- function(elem) {
      ifelse(
        Encoding(elem)=="unknown",
        elem,
        iconv(elem,from=as.character(Encoding(elem)),to="latin1",sub="")
      )
    }
    x <- sapply(x, sanitize.each.element)
    names(x) <- NULL
    x
  }
  x <- japply( df=x, sel=sapply(x,is.character), FUN=sanitize.text)
  colnames(x) <- gsub("[^a-zA-Z0-9_]", "_", colnames(x) )
  write.csv( x, file, row.names=FALSE, ... )
}
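If you don't have taRifx installed, the japply() step can be replicated in base R by applying the sanitizer to the character columns directly. A sketch under that assumption (the data frame, its column names, and the simplified vectorized sanitizer -- which assumes UTF-8 input -- are made up for illustration):

```r
# Toy data frame with a column name MT would choke on.
df <- data.frame("bad name!" = c("Duke\u2019s B or C", "hi"),
                 n = 1:2,
                 check.names = FALSE, stringsAsFactors = FALSE)

# Simplified, vectorized variant of sanitize.text() from above;
# assumes non-"unknown" elements are UTF-8.
sanitize.text <- function(x) {
  ifelse(Encoding(x) == "unknown",
         x,
         iconv(x, from = "UTF-8", to = "latin1", sub = ""))
}

# Sanitize only the character columns, then clean the column names.
is.chr <- vapply(df, is.character, logical(1))
df[is.chr] <- lapply(df[is.chr], sanitize.text)
colnames(df) <- gsub("[^a-zA-Z0-9_]", "_", colnames(df))
# write.csv(df, "out.csv", row.names = FALSE)
```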

Edit

For lack of a better place to put this code, here is how you can figure out which element of a character vector is causing problems that even the function above won't solve:

#' Function to locate elements of a character vector that R cannot process
#' @param txt A character vector
#' @return A logical vector of length(txt): TRUE if the element can be
#'   processed without error, FALSE if it is a problem element
locateBadString <- function(txt) {
  vapply(txt, function(x) {
    class( try( substr( x, 1, nchar(x) ) ) )!="try-error"
  }, TRUE )
}
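As written, the vapply body returns TRUE when substr() succeeds, so clean elements come back TRUE and an element whose bytes are invalid for its declared encoding comes back FALSE (the try-error branch). A sketch, where the invalid byte sequence is manufactured purely for illustration:

```r
# Repeated from above so the example is self-contained.
locateBadString <- function(txt) {
  vapply(txt, function(x) {
    class(try(substr(x, 1, nchar(x))))!="try-error"
  }, TRUE)
}

# Manufacture a problem element: a stray latin1 byte (0xE7) marked as
# UTF-8, which makes nchar()/substr() fail on it.
bad <- rawToChar(as.raw(c(0x66, 0xE7)))
Encoding(bad) <- "UTF-8"

unname(locateBadString(c("hi", bad)))
# TRUE for the clean element, FALSE for the invalid one
```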

EDIT2

Turns out, this should work:

iconv(x, to = "latin1", sub="")

Thanks to @Masoud in this answer: https://stackoverflow.com/a/20250920/636656
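A quick sketch of that one-liner in action, adding an explicit from = "UTF-8" so the example does not depend on the session locale; the curly apostrophe and en dash stand in for the unrepresentable characters in the question:

```r
x <- "Duke\u2019s B or C \u2013 no concomitant malignancy"

# Neither character exists in latin1, so both are simply dropped.
iconv(x, from = "UTF-8", to = "latin1", sub = "")

# For a strictly ASCII result, target ASCII instead of latin1.
iconv(x, from = "UTF-8", to = "ASCII", sub = "")
```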


