This is sort of a follow-up question to one I asked earlier.
I'm trying to scrape a webpage that I have to log in to reach first. After authentication, the page I need requires a little bit of JavaScript to run before you can view the content. What I've done is followed the instructions here to install Splash to try to render the JavaScript. However...

Before I switched to Splash, authentication with Scrapy's InitSpider was fine. I was getting through the login page and scraping the target page OK (except without the JavaScript working, obviously). But once I added the code to pass the requests through Splash, it looks like I'm not parsing the target page at all.
Spider below. The only difference between the Splash version (here) and the non-Splash version is the start_requests() function; everything else is the same between the two.
import scrapy
from scrapy.spiders.init import InitSpider
from scrapy.spiders import Rule
from scrapy.linkextractors import LinkExtractor


class BboSpider(InitSpider):
    name = "bbo"
    allowed_domains = ["bridgebase.com"]
    start_urls = [
        "http://www.bridgebase.com/myhands/index.php"
    ]
    login_page = "http://www.bridgebase.com/myhands/myhands_login.php?t=%2Fmyhands%2Findex.php%3F"

    # authentication
    def init_request(self):
        return scrapy.http.Request(url=self.login_page, callback=self.login)

    def login(self, response):
        return scrapy.http.FormRequest.from_response(
            response,
            formdata={'username': 'USERNAME', 'password': 'PASSWORD'},
            callback=self.check_login_response)

    def check_login_response(self, response):
        if "recent tournaments" in response.body:
            self.log("Login successful")
            return self.initialized()
        else:
            self.log("Login failed")
            print(response.body)

    # pipe the requests through splash so the JS renders
    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 0.5}
                }
            })

    # what to do when a link is encountered
    rules = (
        Rule(LinkExtractor(), callback='parse_item'),
    )

    # do nothing on new link for now
    def parse_item(self, response):
        pass

    def parse(self, response):
        filename = 'test.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
What's happening now is that test.html, the output of parse(), is simply the login page itself rather than the page I'm supposed to be redirected to after login. This is telling in the log -- ordinarily I would see the "Login successful" line from check_login_response(), but as you can see below it seems like I'm not even getting to that step. Is this because Scrapy is now putting the authentication requests through Splash too, and it's getting hung up there? If that's the case, is there any way to bypass Splash only for the authentication part?
2016-01-24 14:54:56 [scrapy] INFO: Spider opened
2016-01-24 14:54:56 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-01-24 14:54:56 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-01-24 14:55:02 [scrapy] DEBUG: Crawled (200) <POST http://localhost:8050/render.html> (referer: None)
2016-01-24 14:55:02 [scrapy] INFO: Closing spider (finished)
I'm pretty sure I'm not using splash correctly. Can anyone point me to some documentation where I can figure out what's going on?
Update

So, it seems that start_requests fires before the login. Here is the code from InitSpider, minus comments.
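In outline, InitSpider works like this. The following is a dependency-free paraphrase (the stub Spider stands in for scrapy.Spider, and the real code additionally wraps return values in iterate_spider_output), so treat it as illustrative rather than the verbatim source:

```python
class Spider:
    """Stub standing in for scrapy.Spider -- just enough for the demo."""
    start_urls = []

    def start_requests(self):
        # stock behaviour: one request per start URL
        return ["<Request %s>" % url for url in self.start_urls]


class InitSpider(Spider):
    """Paraphrase of scrapy.spiders.init.InitSpider's control flow."""

    def start_requests(self):
        # stash the normal start requests and fire init_request() first
        self._postinit_reqs = super(InitSpider, self).start_requests()
        return self.init_request()

    def initialized(self, response=None):
        # called by the spider once login is done: release the stash
        return self.__dict__.pop('_postinit_reqs')

    def init_request(self):
        return self.initialized()


class Demo(InitSpider):
    start_urls = ["http://www.bridgebase.com/myhands/index.php"]

    def init_request(self):
        # the login request goes out before anything in start_urls
        return ["<login Request>"]


spider = Demo()
print(spider.start_requests())   # the login request, not the start URLs
print(spider.initialized())      # the stashed start-URL requests
```

This makes the ordering visible: overriding start_requests wholesale, as in the spider above, removes the stash-then-login step, so the login chain never starts.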
InitSpider calls the main start_requests within initialized(). Your start_requests is a modified version of the base class's method, so the login chain never runs. So maybe something like this will work: you need to stash the post-init requests the way the base class does, and return self.initialized() from check_login_response.
I don't think Splash alone would handle this particular case well.

Here is the working idea:

- use selenium and the PhantomJS headless browser to log into the website
- pass the cookies from PhantomJS into Scrapy

The code:
It prints "Login successful" and the HTML of the "hands" page.

Alternatively: you can get all the data without the need for JS at all. There are links available for browsers that do not have JavaScript enabled; the URLs are the same bar ?offset=0. You just need to parse the queries from the tourney URL you are interested in and create a FormRequest.

There are numerous links in the output. For hands you get the tview.php?-t=.... links; request each one joined to http://webutil.bridgebase.com/v2/ and it will give you a table of all the data that is easy to parse. There are also tourney=4796-1455303720-&username=... links associated with each hand in the tables. The rest of the parsing I will leave to yourself.
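A minimal sketch of that URL handling with the standard library. The example links are hypothetical, just shaped like the ones described above:

```python
from urllib.parse import urljoin, urlparse, parse_qs

BASE = "http://webutil.bridgebase.com/v2/"


def hand_data_url(tview_link):
    """Join a relative tview.php link from the hands page onto the webutil base."""
    return urljoin(BASE, tview_link)


def tourney_query(tourney_url):
    """Pull the query parameters out of a tourney URL so they can be
    re-submitted (e.g. as formdata in a scrapy FormRequest)."""
    return parse_qs(urlparse(tourney_url).query)


# hypothetical examples shaped like the links the answer mentions
print(hand_data_url("tview.php?t=4796-1455303720"))
print(tourney_query("http://www.bridgebase.com/myhands/hands.php"
                    "?tourney=4796-1455303720-&username=someuser"))
```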