I am using C# HttpWebRequest to get some data from a web page. The problem is that some of the data is updated by JavaScript/AJAX after the page is loaded, so it is not in the response string. Is there a way to have the web request wait until all the scripts on the page have finished executing?
Thanks
Amit
If I interpret your question correctly, there is no simple solution to your problem.
You are scraping the HTML from a server, and since your C# code is not a real web browser, it does not execute client-side scripts.
That means you cannot access information that the HTML you fetch does not contain.
Edit: I don't know how complex the AJAX calls on the original web site are, but you could use Firebug (or Fiddler for IE) to see how those requests are made, and then issue the same calls from your C# application to fetch the pieces of information you need. It's only a theoretical solution, though.
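To illustrate that idea, here is a minimal sketch of calling such an endpoint directly with HttpWebRequest once you have spotted it in Firebug/Fiddler. The URL and the headers are assumptions; use whatever you actually see in the captured request:

```csharp
using System;
using System.IO;
using System.Net;

class AjaxEndpointExample
{
    static void Main()
    {
        // Hypothetical endpoint: replace with the URL observed in Firebug/Fiddler.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://example.com/ajax/getData?itemId=42");

        // Some sites only answer their AJAX endpoints when these headers are present.
        request.Method = "GET";
        request.Accept = "application/json";
        request.Headers["X-Requested-With"] = "XMLHttpRequest";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Usually JSON or an HTML fragment rather than a full page.
            string data = reader.ReadToEnd();
            Console.WriteLine(data);
        }
    }
}
```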
Just an idea, but there is a way to have .NET load a web page as if it were in a browser: using System.Windows.Forms,
you can load the page into a WebBrowser control.
This will probably give you the pre-AJAX DOM, but maybe there is a way to let it run the AJAX first.
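A rough sketch of that approach, assuming a hypothetical URL and that waiting for DocumentCompleted is close enough for your page (AJAX updates may still arrive after that event, so you might need an extra delay or a check for the element you expect):

```csharp
using System;
using System.Windows.Forms;

class BrowserScrapeExample
{
    [STAThread]
    static void Main()
    {
        // WebBrowser must run on an STA thread with a message loop.
        var browser = new WebBrowser { ScriptErrorsSuppressed = true };

        browser.DocumentCompleted += (s, e) =>
        {
            // Fires when the initial page has loaded; AJAX updates can still
            // arrive afterwards, so you may need to wait a bit longer or poll
            // the DOM for the data you are after.
            Console.WriteLine(browser.DocumentText);
            Application.ExitThread();
        };

        browser.Navigate("http://example.com/page-with-ajax"); // hypothetical URL
        Application.Run(); // pump messages so navigation and scripts can run
    }
}
```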
When you open a web page in a web browser, it is the browser that executes the JavaScript and downloads the additional resources used by the page (images, scripts, etc.). HttpWebRequest by itself will not do any of this; it only downloads the HTML of the page you requested and will never execute any of the JavaScript/AJAX code on its own.
Use HttpWebRequest to download the page, programmatically search the source code for the relevant AJAX information, and then use a new HttpWebRequest to pull that data down.
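A minimal sketch of that two-step pattern, assuming a hypothetical page URL and a purely illustrative `data-url` attribute as the thing to search for (inspect the real page source to see how the AJAX URL actually appears):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text.RegularExpressions;

class TwoStepScrapeExample
{
    static string Download(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        // Step 1: fetch the raw HTML (hypothetical page URL).
        string html = Download("http://example.com/product/42");

        // Step 2: find the URL the page's script would call (illustrative pattern).
        Match match = Regex.Match(html, @"data-url=""(?<url>[^""]+)""");
        if (match.Success)
        {
            string data = Download("http://example.com" + match.Groups["url"].Value);
            Console.WriteLine(data);
        }
    }
}
```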
You could use PhantomJS. I had this issue and couldn't find another solution for my problem; in my opinion, this is the best approach (see the sketch below).
My alternative looks like this: use HttpWebRequest to download the page, search the source code for the relevant AJAX information, and then use a new HttpWebRequest to pull that data down.
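If you go the PhantomJS route, one way to drive it from C# is to write a small script and launch the phantomjs executable. This is only a sketch: it assumes phantomjs is on the PATH, the URL is a placeholder, and the 3-second wait for AJAX to finish is a rough assumption you would tune for the real page:

```csharp
using System;
using System.Diagnostics;
using System.IO;

class PhantomJsExample
{
    static void Main()
    {
        // Minimal PhantomJS script: open the page, give its scripts time to run,
        // then print the rendered DOM. The 3000 ms wait is an assumption.
        string script = @"
            var page = require('webpage').create();
            page.open('http://example.com/page-with-ajax', function () {
                window.setTimeout(function () {
                    console.log(page.content);
                    phantom.exit();
                }, 3000);
            });";

        string scriptPath = Path.Combine(Path.GetTempPath(), "scrape.js");
        File.WriteAllText(scriptPath, script);

        // Assumes phantomjs is on the PATH; otherwise give its full path.
        var psi = new ProcessStartInfo("phantomjs", "\"" + scriptPath + "\"")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using (var process = Process.Start(psi))
        {
            string renderedHtml = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
            Console.WriteLine(renderedHtml);
        }
    }
}
```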