I'm trying to develop a simple web scraper. I want to extract plain text without the HTML markup. I have achieved this goal, but I've noticed that on some pages where JavaScript is loaded I don't get good results.
For example, if some JavaScript code adds some text, I can't see it, because when I call
response = urllib2.urlopen(request)
I get the original text without the added one (because JavaScript is executed in the client).
So, I'm looking for some ideas to solve this problem.
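The problem can be reproduced without any network access: parse a page whose script would add text, and note that the extracted text contains only the server-sent markup. The sample HTML below is made up for illustration.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect text nodes, skipping the contents of <script> tags."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if not self.in_script and data.strip():
            self.chunks.append(data.strip())

# What the server actually sends: the extra paragraph only exists after a
# browser runs the script, so a plain fetch never sees it.
html = """<html><body><p>Static text</p>
<script>document.body.innerHTML += '<p>Added by JavaScript</p>';</script>
</body></html>"""

parser = TextExtractor()
parser.feed(html)
print(parser.chunks)  # ['Static text'] -- the JS-added paragraph is missing
```

This is exactly what happens with urllib2.urlopen: the response is the unexecuted source, which is what all the answers below work around.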
We are not getting the correct results because any JavaScript-generated content needs to be rendered in the DOM. When we fetch an HTML page, we fetch the initial DOM, unmodified by JavaScript.
Therefore we need to render the JavaScript content before we crawl the page.
As Selenium is already mentioned many times in this thread (and how slow it sometimes gets was also mentioned), I will list two other possible solutions.
Solution 1: This is a very nice tutorial on how to use Scrapy to crawl JavaScript-generated content, and we are going to follow just that.
What we will need:
Docker installed on our machine. This is a plus over the other solutions so far, as it utilizes an OS-independent platform.
Install Splash following the instructions listed for our corresponding OS.
Quoting from the Splash documentation:
Essentially, we are going to use Splash to render JavaScript-generated content.
Run the splash server:
sudo docker run -p 8050:8050 scrapinghub/splash
Install the scrapy-splash plugin:
pip install scrapy-splash
Assuming that we already have a Scrapy project created (if not, let's make one), we will follow the guide and update settings.py.
Finally, we can use a SplashRequest:
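The settings changes and the request itself might look like the following sketch, based on the scrapy-splash README; the spider name and target URL are placeholders:

```python
# settings.py -- additions described in the scrapy-splash README
SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# spider -- yield a SplashRequest so Splash renders the page first
import scrapy
from scrapy_splash import SplashRequest

class MySpider(scrapy.Spider):
    name = 'myspider'  # placeholder name

    def start_requests(self):
        # 'wait' gives the page's JavaScript time to run before rendering
        yield SplashRequest('http://example.com', self.parse,
                            args={'wait': 0.5})

    def parse(self, response):
        # response.body now contains the JavaScript-rendered HTML
        yield {'title': response.css('title::text').get()}
```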
Solution 2: Let's call this experimental at the moment (May 2018)...
This solution is for Python version 3.6 only (at the moment).
Do you know the requests module (well, who doesn't)?
Now it has a web-crawling little sibling: requests-HTML:
Install requests-html:
pipenv install requests-html
Make a request to the page's URL:
Render the response to get the JavaScript-generated bits:
Finally, the module seems to offer scraping capabilities.
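Put together, the whole flow might look like this sketch (the URL is a placeholder, and note that render() downloads a headless Chromium on first use):

```python
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('http://example.com')  # placeholder URL

# Execute the page's JavaScript in headless Chromium and
# replace r.html with the rendered result.
r.html.render(sleep=1)

# Use the module's own scraping helpers...
print(r.html.find('title', first=True).text)

# ...or hand the rendered HTML to BeautifulSoup instead:
# from bs4 import BeautifulSoup
# soup = BeautifulSoup(r.html.html, 'html.parser')
```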
Alternatively, we can try the well-documented way of using BeautifulSoup with the r.html object we just rendered.

Maybe Selenium can do it.
EDIT 30/Dec/2017: This answer appears in top results of Google searches, so I decided to update it. The old answer is still at the end.
dryscrape isn't maintained anymore, and the library the dryscrape developers recommend is Python 2 only. I have found that using Selenium's Python library with PhantomJS as a web driver is fast enough and easy to get the work done.
Once you have installed PhantomJS, make sure the phantomjs binary is available in the current path:
Example
To give an example, I created a sample page with the following HTML code (link):
without javascript it says:
No javascript support
and with javascript:
Yay! Supports javascript
Scraping without JS support:
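Without JS support, this is just a plain fetch, as in the question; a minimal sketch (the sample-page URL is a placeholder, and urllib2 is Python 2, matching the question):

```python
import urllib2  # Python 2, as in the question
from bs4 import BeautifulSoup

response = urllib2.urlopen('http://example.com/test.html')  # placeholder URL
soup = BeautifulSoup(response.read(), 'html.parser')
# Only the static markup is present, so this prints the
# "No javascript support" text.
print(soup.get_text(strip=True))
```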
Scraping with JS support:
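With JS support, the same page can be fetched through PhantomJS, which executes the script before we read the source; a sketch with a placeholder URL:

```python
from selenium import webdriver

driver = webdriver.PhantomJS()  # the phantomjs binary must be on PATH
driver.get('http://example.com/test.html')  # placeholder URL

# PhantomJS has run the page's JavaScript by the time we read the source,
# so page_source now contains the JS-added "Yay! Supports javascript" text.
print(driver.page_source)
driver.quit()
```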
You can also use the Python library dryscrape to scrape JavaScript-driven websites.
Scraping with JS support:
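A minimal dryscrape sketch might look like this (the URL is a placeholder; dryscrape drives a headless WebKit):

```python
import dryscrape
from bs4 import BeautifulSoup

session = dryscrape.Session()
session.visit('http://example.com/test.html')  # placeholder URL

# session.body() returns the HTML after the page's JavaScript has run
soup = BeautifulSoup(session.body(), 'html.parser')
print(soup.get_text(strip=True))
```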
You can also execute JavaScript using the webdriver.
or store the value in a variable
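Both variants can be sketched with execute_script (the URL and the scripts are illustrative):

```python
from selenium import webdriver

driver = webdriver.Firefox()
driver.get('http://example.com')  # placeholder URL

# Run arbitrary JavaScript in the page...
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

# ...or store the value it returns in a Python variable.
title = driver.execute_script("return document.title;")
print(title)
driver.quit()
```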
A mix of BeautifulSoup and Selenium works very well for me.
P.S. You can find more wait conditions here
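One way this mix might look: Selenium drives the browser and waits for the JavaScript-generated element, then BeautifulSoup does the parsing. The URL and the 'content' element id are placeholders for whatever the real page creates.

```python
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get('http://example.com')  # placeholder URL

# Block until the JavaScript-generated element exists (up to 10 seconds).
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'content')))  # placeholder id

# Hand the rendered page to BeautifulSoup for the actual parsing.
soup = BeautifulSoup(driver.page_source, 'html.parser')
print(soup.find(id='content').get_text())
driver.quit()
```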
It sounds like the data you're really looking for can be accessed via a secondary URL called by some JavaScript on the primary page.
While you could try running the JavaScript on the server to handle this, a simpler approach might be to load the page in Firefox and use a tool like Charles or Firebug to identify exactly what that secondary URL is. Then you can just query that URL directly for the data you are interested in.
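Once the secondary URL is identified, hitting it directly is usually trivial because such endpoints tend to return plain JSON. Both the endpoint and the payload below are made up for illustration:

```python
import json
from urllib.request import urlopen

# Hypothetical secondary URL discovered with Charles/Firebug --
# replace it with the real endpoint seen in the network traffic.
SECONDARY_URL = "http://example.com/api/items?page=1"

def fetch_items(url):
    """Query the data endpoint directly and decode its JSON payload."""
    with urlopen(url) as response:
        return json.loads(response.read().decode("utf-8"))

# Such an endpoint typically returns something like this payload,
# which needs no HTML parsing at all:
sample_payload = '{"items": [{"title": "First"}, {"title": "Second"}]}'
data = json.loads(sample_payload)
print([item["title"] for item in data["items"]])  # ['First', 'Second']
```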