Often when crawling we run into problems where content rendered on the page is generated with JavaScript, and therefore scrapy is unable to crawl it (e.g. Ajax requests, jQuery).
You want to have a look at PhantomJS. There is a PHP implementation here:
http://jonnnnyw.github.io/php-phantomjs/
if you need to have it working with PHP, of course.
You could read the page with it and then feed the contents to Guzzle, in order to use the nice functions that Guzzle gives you (like searching the contents, etc.). That would depend on your needs; maybe you can simply use the DOM, like this:
How to get element by class name?
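For the simple DOM case, here is a minimal sketch using only PHP's built-in DOM extension (no Guzzle or PhantomJS required); the HTML string, class name, and helper function are made up for illustration:

```php
<?php
// Parse an HTML string with PHP's built-in DOM extension and
// select elements by class name using an XPath query.
function getElementsByClassName(string $html, string $class): array
{
    $doc = new DOMDocument();
    // Suppress warnings caused by imperfect real-world markup.
    @$doc->loadHTML($html);
    $xpath = new DOMXPath($doc);
    // Match the class token even when the attribute holds several classes.
    $nodes = $xpath->query(
        "//*[contains(concat(' ', normalize-space(@class), ' '), ' $class ')]"
    );
    $texts = [];
    foreach ($nodes as $node) {
        $texts[] = trim($node->textContent);
    }
    return $texts;
}

$html = '<div class="item">first</div><p class="item hot">second</p>';
echo implode(',', getElementsByClassName($html, 'item')); // prints "first,second"
```

The `concat(' ', ..., ' ')` trick makes the XPath match whole class tokens, so `class="item hot"` matches `item` but `class="items"` does not.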
Here is some working code.
The only disadvantage of using Phantom is that it will be slower than Guzzle, but of course you have to wait for all that pesky JS to be loaded.
Guzzle (which Goutte uses internally) is an HTTP client. As a result, JavaScript content will not be parsed or executed, and JavaScript files referenced by the requested page will not be downloaded.
Depending upon your environment, I suppose it would be possible to utilize V8Js (a PHP extension that embeds the Google V8 JavaScript engine) together with a custom handler / middleware to perform what you want.
Then again, depending on your environment, it might be easier to simply perform the scraping with a JavaScript-based client.
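To illustrate the point about JavaScript not being executed, here is a small sketch (the HTML is invented for the example): a static parse sees only the markup the server sent, never what a script would have rendered in a browser:

```php
<?php
// HTML as an HTTP client like Guzzle would receive it: the <div> is
// empty in the source, and only a real browser would run the script.
$html = <<<HTML
<html><body>
<div id="result"></div>
<script>document.getElementById("result").textContent = "rendered by JS";</script>
</body></html>
HTML;

$doc = new DOMDocument();
@$doc->loadHTML($html);

// A static parse exposes the div exactly as served: still empty.
$xpath = new DOMXPath($doc);
$div = $xpath->query('//div[@id="result"]')->item(0);
echo $div->textContent === '' ? 'empty' : 'filled'; // prints "empty"
```

This is the gap that PhantomJS (or any headless browser) fills: it executes the script first, so the same query would then return the rendered text.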
I would recommend trying to get the response content, parse it (if you have to) into new HTML, and use that as $html when initializing a new Crawler object; after that you can use all the data in the response like any other Crawler object.
Since it is impossible to work with JavaScript here, I can suggest another solution:
This will only work for single jobs and not automated processes. In my case this will do.