R rvest encoding errors with UTF-8

Posted 2019-04-10 09:39

Question:

I'm trying to get this table from Wikipedia. The source of the page claims it's UTF-8:

> <!DOCTYPE html> <html lang="en" dir="ltr" class="client-nojs"> <head>
> <meta charset="UTF-8"/> <title>List of cities in Colombia - Wikipedia,
> the free encyclopedia</title>
> ...

However, when I try to get the table with rvest it shows weird characters where there should be accented (standard Spanish) ones like á, é, etc. This is what I attempted:

library(rvest)

theurl <- "https://en.wikipedia.org/wiki/List_of_cities_in_Colombia"
file <- read_html(theurl, encoding = "UTF-8")  # parse the page as UTF-8
tables <- html_nodes(file, "table")            # collect all <table> nodes
pop <- html_table(tables[[2]])                 # the second table holds the city list
head(pop)

##   No.         City Population         Department
## 1   1      Bogotá  6.840.116       Cundinamarca
## 2   2    Medellín  2.214.494          Antioquia
## 3   3         Cali  2.119.908    Valle del Cauca
## 4   4 Barranquilla  1.146.359         Atlántico
## 5   5    Cartagena    892.545           Bolívar
## 6   6      Cúcuta    587.676 Norte de Santander

I have attempted to repair the encoding, as suggested in other SO questions, with:

repair_encoding(pop)

## Best guess: UTF-8 (100% confident)
## Error in stringi::stri_conv(x, from = from) : 
##   all elements in `str` should be a raw vectors

I've tested several different encodings (latin1, and others suggested by guess_encoding()), but all of them produce similarly incorrect results.

How can I properly load this table?

Answer 1:

It looks like you have to apply repair_encoding() to a character vector rather than to an entire data frame:

> repair_encoding(head(pop[,2]))
Best guess: UTF-8 (80% confident)
[1] "Bogotá"       "Medellín"     "Cali"         "Barranquilla"
[5] "Cartagena"    "Cúcuta"