I am trying to get the contents of a text file hosted on my website using Python. The server requires JavaScript to be enabled in the browser, so when I run:
import urllib2
target_url = "http://09hannd.me/ai/request.txt"
data = urllib2.urlopen(target_url)
contents = data.read()
I receive an HTML page telling me to enable JavaScript. Is there a way of faking having JS enabled, or something similar?
Thanks
Selenium is the way to go here, but there is another "hacky" option.
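With Selenium, a real browser loads the page and runs its JavaScript, after which you can read the rendered text. A minimal sketch using headless Chrome might look like the following; the URL comes from your question, but the driver setup is an assumption that depends on which browser and driver you have installed:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

options = Options()
options.add_argument("--headless")  # render the page without opening a window

driver = webdriver.Chrome(options=options)
try:
    # The browser executes the page's JavaScript, so the "enable JS" check passes
    driver.get("http://09hannd.me/ai/request.txt")
    # Read the visible text of the rendered page
    print(driver.find_element(By.TAG_NAME, "body").text)
finally:
    driver.quit()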
Based on this answer (https://stackoverflow.com/a/26393257/2517622), I would probably suggest a tool like dryscrape: https://github.com/niklasb/dryscrape
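A rough sketch of what that could look like, with the API names taken from the dryscrape README (I haven't run this against your URL, so treat it as an assumption):

import dryscrape

# dryscrape drives a headless WebKit instance, so the page's JavaScript executes
session = dryscrape.Session()
session.visit("http://09hannd.me/ai/request.txt")
print(session.body())  # the rendered page source after JS has run

dryscrape is lighter-weight than a full Selenium setup, but it needs WebKit dependencies installed and is less actively maintained, so Selenium is usually the safer bet.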
Additionally, you can see more info here: Using python with selenium to scrape dynamic web pages