What
My web app is made dynamic with Google's AngularJS.
I want to generate static versions of my pages.
Why
Web scrapers like Google's do execute and render JavaScript, but they don't treat the rendered content the same way as its static equivalent.
References:
- Does heavy JavaScript use adversely impact Googleability? (Programmers StackExchange)
- Making AJAX Applications Crawlable (Google Documentation for webmasters)
How
I'm not sure exactly how (which is why I'm asking), but I want to access the same source that your browser's 'Inspect Element' presents, rather than the source that Ctrl+U (View Page Source) shows.
Once I have a script that renders the page and 'spits out' the resulting HTML and CSS, I will place those generated files on my web server. A cron job will then be scheduled to regenerate the files at regular intervals.
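One way this could work (a sketch, not a definitive answer, assuming Selenium plus a headless Chrome/chromedriver are installed; the function name, URL, and output path are placeholders of mine) is to drive a headless browser, give AngularJS a moment to render, and dump `document.documentElement.outerHTML` — the post-JavaScript DOM that 'Inspect Element' shows:

```python
def snapshot(url, out_path, wait_seconds=2):
    """Render `url` in a headless browser and write the post-JavaScript
    DOM to `out_path`."""
    import time
    # Selenium is imported lazily so the sketch can be loaded even where
    # it is not installed; assumes: pip install selenium
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    opts.add_argument("--headless=new")  # run Chrome without a window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        time.sleep(wait_seconds)  # crude wait for AngularJS to finish rendering
        html = driver.execute_script(
            "return document.documentElement.outerHTML")
        with open(out_path, "w", encoding="utf-8") as f:
            f.write("<!DOCTYPE html>\n" + html)
    finally:
        driver.quit()
```

A script wrapping this function could then be run from cron, e.g. a crontab entry like `0 * * * * /usr/bin/python3 /path/to/snapshot_pages.py` (hypothetical path) to regenerate the files hourly. A fixed `time.sleep` is the crudest possible wait; a real script would poll for some element that Angular renders last.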
These static files will then be served instead of the dynamic ones whenever JavaScript is disabled and/or a scraper visits the site.
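The "serve the snapshot to scrapers" decision could be sketched as below (assuming the convention from the Google AJAX-crawling document referenced above, where crawlers rewrite `#!` URLs to `?_escaped_fragment_=...`; the bot-signature list and function name are my own illustration, not an exhaustive detection method):

```python
# Substrings that identify common crawler User-Agents (illustrative, not exhaustive).
BOT_SIGNATURES = ("googlebot", "bingbot", "baiduspider", "yandex")

def should_serve_snapshot(query_params, user_agent):
    """Return True when the pre-rendered static file should be served.

    `query_params` is a dict of the request's query-string parameters;
    `user_agent` is the request's User-Agent header (may be None).
    """
    # Crawlers following Google's AJAX-crawling scheme request
    # '?_escaped_fragment_=...' in place of '#!' fragments.
    if "_escaped_fragment_" in query_params:
        return True
    ua = (user_agent or "").lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

print(should_serve_snapshot({}, "Mozilla/5.0 (compatible; Googlebot/2.1)"))  # True
print(should_serve_snapshot({"_escaped_fragment_": ""}, None))               # True
print(should_serve_snapshot({}, "Mozilla/5.0 Firefox/120.0"))                # False
```

Note that "JavaScript is disabled" cannot be detected server-side from request headers alone; the usual fallback for that case is a `<noscript>` block or redirect in the page itself.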