I want to create a scraper using Google Spreadsheets with Google Apps Script. I know it is possible and I have seen some tutorials and threads about it.
The main idea is to use:
var html = UrlFetchApp.fetch('http://en.wikipedia.org/wiki/Document_Object_Model').getContentText();
var doc = XmlService.parse(html);
And then get and work with the elements. However, the XmlService.parse() method does not work for some pages. For example, if I try:
function test(){
  var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
  var parse = XmlService.parse(html);
}
I get the following error:
Error on line 225: The entity name must immediately follow the '&' in the entity reference. (line 3, file "")
I've tried to use string.replace() to eliminate the characters that apparently cause the error, but it does not work; all sorts of other errors appear. The following code, for example:
function test(){
  var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
  var regExp = new RegExp("&", "gi");
  html = html.replace(regExp,"");
  var parse = XmlService.parse(html);
}
Gives me the following error:
Error on line 358: The content of elements must consist of well-formed character data or markup. (line 6, file "")
I believe this is a problem with the XmlService.parse() method.
I've read in these threads: Google App Script parse table from messed html and What is the best way to parse html in google apps script that one can use a deprecated method called Xml.parse(), which accepts a second parameter that allows parsing HTML. However, as I've mentioned, it is deprecated and I cannot find any documentation for it anywhere. Xml.parse() seems to parse the string, but I have trouble working with the elements due to the lack of documentation, and it is also not the safest long-term solution, because it could be deactivated at any time.
So, how do I parse this HTML in Google Apps Script?
I also tried:
function test(){
  var html = UrlFetchApp.fetch("https://www.nespresso.com/br/pt/product/maquina-de-cafe-espresso-pixie-clips-preto-lima-neon-c60-220v").getContentText();
  var htmlOutput = HtmlService.createHtmlOutput(html).getContent();
  var parse = XmlService.parse(htmlOutput);
}
But it does not work either; I get this error:
Malformed HTML content:
I thought about using an open-source library to parse the HTML, but I could not find any.
My ultimate goal is to get some information (price, link, product name, etc.) from a set of pages. I've managed to do this using a series of RegEx:
var ss = SpreadsheetApp.getActiveSpreadsheet();
var linksSheet = ss.getSheetByName("Links");
var resultadosSheet = ss.getSheetByName("Resultados");

function scrapyLoco(){
  var links = linksSheet.getRange(1, 1, linksSheet.getLastRow(), 1).getValues();
  var arrayGrandao = [];
  for (var row = 0, len = links.length; row < len; row++){
    var link = links[row];
    var arrayDeResultados = pegarAsCoisas(link[0]);
    Logger.log(arrayDeResultados);
    arrayGrandao.push(arrayDeResultados);
  }
  resultadosSheet.getRange(2, 1, arrayGrandao.length, arrayGrandao[0].length).setValues(arrayGrandao);
}

function pegarAsCoisas(linkDoProduto) {
  var resultadoArray = [];
  var html = UrlFetchApp.fetch(linkDoProduto).getContentText();

  // Product name, taken from the <h1> tag
  var regExp = new RegExp("<h1([^]*)h1>", "gi");
  var h1Html = regExp.exec(html);
  var h1Parse = XmlService.parse(h1Html[0]);
  var h1Output = h1Parse.getRootElement().getText();
  h1Output = h1Output.replace(/(\r\n|\n|\r|(^( )*))/gm,"");

  // Product code ("Ref.: ...")
  regExp = new RegExp("Ref.: ([^(])*", "gi");
  var codeHtml = regExp.exec(html);
  var codeOutput = codeHtml[0].replace("Ref.: ","").replace(" ","");

  // Description: first sentence of the first <p> inside the description block
  regExp = new RegExp("margin-top: 5px; margin-bottom: 5px; padding: 5px; background-color: #699D15; color: #fff; text-align: center;([^]*)/div>", "gi");
  var descriptionHtml = regExp.exec(html);
  regExp = new RegExp("<p([^]*)p>", "gi");
  descriptionHtml = regExp.exec(descriptionHtml);
  regExp = new RegExp("^[^.]*", "gi");
  descriptionHtml = regExp.exec(descriptionHtml);
  var descriptionOutput = descriptionHtml[0].replace("<p>","");
  descriptionOutput = descriptionOutput+".";

  // Image URL
  regExp = new RegExp("ecom(.+?)Main.png", "gi");
  var imageHtml = regExp.exec(html);
  var comecoDaURL = "https://www.nespresso.com/";
  var imageOutput = comecoDaURL+imageHtml[0];

  // Price
  regExp = new RegExp("nes_l-float nes_big-price nes_big-price-with-out([^]*)p>", "gi");
  var precoHtml = regExp.exec(html);
  regExp = new RegExp("[0-9]*,", "gi");
  precoHtml = regExp.exec(precoHtml);
  var precoOutput = "BRL "+precoHtml[0].replace(",","");

  resultadoArray = [codeOutput, h1Output, descriptionOutput,
    "Home & Garden > Kitchen & Dining > Kitchen Appliances > Coffee Makers & Espresso Machines",
    "Máquina", linkDoProduto, imageOutput, "new", "in stock", precoOutput, "", "", "", "Nespresso", codeOutput];
  return resultadoArray;
}
But this is very time-consuming to program, it is very hard to change dynamically, and it is not very reliable.
I need a way to parse this HTML and easily access its elements. It's actually not an add-on, just a simple Google Apps Script.
Maybe not the cleanest approach, but simple string processing does the job too, without XmlService.
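A minimal sketch of the idea (the function name and the markers are mine; adapt them to the page you are scraping):

function getPrice(url) {
  var html = UrlFetchApp.fetch(url).getContentText();
  // Find a marker that sits right before the value we want and cut the text out.
  // 'itemprop="price"' is only a placeholder - inspect the page source and adjust it.
  var start = html.indexOf('itemprop="price"');
  if (start === -1) return null;
  start = html.indexOf('>', start) + 1;       // jump past the end of the opening tag
  var end = html.indexOf('<', start);         // stop at the next tag
  return html.substring(start, end).trim();
}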
I made cheeriogs for your problem. It works on GAS like cheerio, which has a jQuery-like API. You can do it like the sketch below.
See also https://github.com/asciian/cheeriogs
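For example, a small sketch assuming the library has been added under its default identifier Cheerio and that the selector fits your page:

function scrapeWithCheerio() {
  var html = UrlFetchApp.fetch('https://en.wikipedia.org/wiki/Document_Object_Model').getContentText();
  var $ = Cheerio.load(html);            // Cheerio is the identifier exposed by cheeriogs
  Logger.log($('h1').first().text());    // jQuery-like selectors to reach the elements
}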
I have done this in vanilla JS. It is not real HTML parsing; I just try to get some content out of the fetched page string.
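Something along these lines (the title regex is only an illustration; swap in whatever pattern matches the piece of content you need):

function getTitle(url) {
  var page = UrlFetchApp.fetch(url).getContentText();
  var match = page.match(/<title>([^<]*)<\/title>/i);   // plain string/regex work, no HTML parser involved
  return match ? match[1].trim() : '';
}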
This has been discussed before. See here: What is the best way to parse html in google apps script
Unlike the old Xml service, XmlService is not very forgiving of malformed HTML. The trick in the answer by Justin Bicknell does the job: even though the Xml service has been deprecated, it still continues to work.
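For reference, the trick is the lenient flag of the deprecated Xml service. A rough sketch from memory (the reference docs are gone, so the element-navigation calls may need double-checking):

function parseLeniently(url) {
  var html = UrlFetchApp.fetch(url).getContentText();
  var doc = Xml.parse(html, true);        // second argument = lenient parsing, which tolerates HTML
  var body = doc.html.body;               // the old service allowed this property-style navigation
  Logger.log(body.toXmlString());         // re-serialize, or keep walking the child elements
}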
Please be aware that certain websites may not permit automated scraping of their content, so please consult their terms of service before using Apps Script to extract the content.
XmlService only works against valid XML documents, and most HTML (especially HTML5) is not valid XML. A previous version of XmlService, simply called Xml, allowed for "lenient" parsing, which would let it parse HTML as well. This service was sunset in 2013, but for the time being it still functions. The reference docs are no longer available, but this old tutorial shows its usage.
Another alternative is to use a service like Kimono, which handles the scraping and parsing parts and provides a simple API you can call via UrlFetchApp to retrieve the structured data.
I've found a very neat alternative for scraping with Google Apps Script. It is called PhantomJS Cloud. You can use UrlFetchApp to access its API. This allows you to execute jQuery code on the pages, which makes life so much simpler.
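A sketch of how that call can look via UrlFetchApp (the endpoint and payload fields are how I remember the PhantomJS Cloud docs; use your own API key and verify against the current documentation):

function fetchRenderedHtml(url) {
  var key = 'YOUR-PHANTOMJSCLOUD-API-KEY';   // placeholder key
  var request = encodeURIComponent(JSON.stringify({ url: url, renderType: 'html' }));
  var apiUrl = 'https://phantomjscloud.com/api/browser/v2/' + key + '/?request=' + request;
  return UrlFetchApp.fetch(apiUrl).getContentText();   // JavaScript-rendered HTML of the page
}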