How do I make my AJAX content crawlable by Google?

Posted 2019-06-09 00:59

Question:

I've been working on a site that uses jQuery heavily and loads in content via AJAX like so:

$('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
    //on load logic
});

It has now come to my attention that Google won't index content that is loaded dynamically via JavaScript, so I've been looking for a solution to the problem.

I've read through Google's Making AJAX Applications Crawlable document what seems like 100 times and I still don't understand how to implement it (due for the most part to my limited knowledge of servers).

So my first question would be:

  • Is there a decent step-by-step tutorial out there that documents this from start to finish that you know of? I've tried to Google it and I'm not finding anything useful.

And secondly, if there isn't anything out there yet, would anyone be able to explain:

  1. How to 'Set up my server to handle requests for URLs that contain _escaped_fragment_'

  2. How to implement HtmlUnit on my server to create an 'HTML snapshot' of the page to show to the crawler.

I would be incredibly grateful if someone could shed some light on this for me, thanks in advance!

-Ben

Answer 1:

The best solution is to build a site that works both with and without JavaScript. Read up on progressive enhancement.
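For example, here is a minimal progressive-enhancement sketch (the .nav selector and page URLs are assumptions for illustration, not the asker's actual markup): every link points to a real, server-rendered page, so crawlers and non-JavaScript visitors simply follow it, while jQuery intercepts the click and loads the same content via AJAX.

// Progressive-enhancement sketch: links work as normal full page loads
// without JavaScript; with JavaScript the content is fetched via AJAX instead.
$(function() {
    $('.nav a').on('click', function(e) {
        e.preventDefault();                      // keep JS users on the AJAX path
        var newPath = $(this).attr('href');      // a real URL the server can render

        $('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
            //on load logic
        });
    });
});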



Answer 2:

I couldn't find an alternative, so I took epascarello's advice and I'm now generating the content with PHP when the URL includes '_escaped_fragment_' (a crawler's request URL will include it).

For those searching:

<?php
    if (isset($_GET['_escaped_fragment_'])) {
        $newID = $_GET['_escaped_fragment_'];
        //Generate page here
    }
?>
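The PHP branch above only covers the server side. Under Google's AJAX crawling scheme the client has to use hashbang URLs such as example.com/#!about, which Googlebot rewrites to example.com/?_escaped_fragment_=about before requesting them, so the snippet above sees the fragment in $_GET. Here is a rough client-side counterpart (the path scheme and selectors are assumptions for illustration, not the asker's actual setup):

// Client-side sketch: read the #! fragment and load the matching content,
// so JS users get AJAX while Googlebot gets the PHP-generated snapshot.
$(window).on('hashchange', function() {
    var fragment = window.location.hash.replace(/^#!/, ''); // e.g. "about"
    if (fragment) {
        var newPath = '/pages/' + fragment + '.html';        // hypothetical layout
        $('#newPageWrapper').load(newPath + ' .pageWrapper');
    }
});
$(window).trigger('hashchange'); // handle a fragment present on initial load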


Answer 3:

These days this problem is typically solved with a service that implements Google's Making AJAX Applications Crawlable scheme at the web-server level, so you no longer have to do it yourself.

I work for one of these companies: https://ajaxsnapshots.com (there are others)