I am having an issue communicating between Selenium and a Scrapy object.
I am using Selenium to log in to a site; once I get that response, I want to use Scrapy's functionality to parse and process it. Can someone please help me write a middleware so that every request goes through the Selenium webdriver and the response is passed back to Scrapy?
Thank you!
It's pretty straightforward: create a middleware with a webdriver and use
process_request
to intercept the request, discard it, and pass its URL to your Selenium webdriver. The downside is that you have to give up concurrency in your spider, since the Selenium webdriver can only handle one URL at a time. For that, see the settings documentation page.