How do I make a simple crawler in PHP?

Posted 2019-01-02 21:45

I have a web page with a bunch of links. I want to write a script which would dump all the data contained in those links in a local file.

Has anybody done that with PHP? General guidelines and gotchas would suffice as an answer.

15 answers
我命由我不由天
#2 · 2019-01-02 22:11

I created a small class to grab data from the provided URL and then extract the HTML elements of your choice. The class makes use of cURL and DOMDocument.

PHP class:

class crawler {

   public static $timeout = 2;
   public static $agent   = 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)';

   // Fetch the raw response body of $url via cURL; returns false on failure.
   public static function http_request($url) {
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_URL,            $url);
      curl_setopt($ch, CURLOPT_USERAGENT,      self::$agent);
      curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, self::$timeout);
      curl_setopt($ch, CURLOPT_TIMEOUT,        self::$timeout);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      $response = curl_exec($ch);
      curl_close($ch);
      return $response;
   }

   // Collapse runs of whitespace to single spaces and trim the ends.
   public static function strip_whitespace($data) {
      $data = preg_replace('/\s+/', ' ', $data);
      return trim($data);
   }

   // Return the text content and attributes of every <$tag> element in $data.
   // The @ suppresses DOMDocument warnings on malformed HTML.
   public static function extract_elements($tag, $data) {
      $response = array();
      $dom      = new DOMDocument;
      @$dom->loadHTML($data);
      foreach ( $dom->getElementsByTagName($tag) as $index => $element ) {
         $response[$index]['text'] = self::strip_whitespace($element->nodeValue);
         foreach ( $element->attributes as $attribute ) {
            $response[$index]['attributes'][strtolower($attribute->nodeName)] = self::strip_whitespace($attribute->nodeValue);
         }
      }
      return $response;
   }
}

Example usage:

$data  = crawler::http_request('https://stackoverflow.com/questions/2313107/how-do-i-make-a-simple-crawler-in-php');
$links = crawler::extract_elements('a', $data);
if ( count($links) > 0 ) {
   file_put_contents('links.json', json_encode($links, JSON_PRETTY_PRINT));
}

Example response:

[
    {
        "text": "Stack Overflow",
        "attributes": {
            "href": "https:\/\/stackoverflow.com",
            "class": "-logo js-gps-track",
            "data-gps-track": "top_nav.click({is_current:false, location:2, destination:8})"
        }
    },
    {
        "text": "Questions",
        "attributes": {
            "id": "nav-questions",
            "href": "\/questions",
            "class": "-link js-gps-track",
            "data-gps-track": "top_nav.click({is_current:true, location:2, destination:1})"
        }
    },
    {
        "text": "Developer Jobs",
        "attributes": {
            "id": "nav-jobs",
            "href": "\/jobs?med=site-ui&ref=jobs-tab",
            "class": "-link js-gps-track",
            "data-gps-track": "top_nav.click({is_current:false, location:2, destination:6})"
        }
    }
]
闹够了就滚
#3 · 2019-01-02 22:13

It's worth remembering that when crawling external links (I do appreciate that the OP relates to a user's own page) you should be aware of robots.txt. I have found the following, which will hopefully help: http://www.the-art-of-web.com/php/parse-robots/.
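
If you just want a quick check without pulling in a library, a minimal sketch along these lines might do. It only handles literal Disallow: prefixes within a matching User-agent: group (no wildcard support), and robots_allowed is a made-up helper name:

function robots_allowed($baseUrl, $path, $userAgent = '*')
{
    // Fetch robots.txt from the site root; if there is none, assume allowed.
    $robots = @file_get_contents(rtrim($baseUrl, '/') . '/robots.txt');
    if ($robots === false) {
        return true;
    }
    $applies = false;
    foreach (preg_split('/\r\n|\r|\n/', $robots) as $line) {
        $line = trim(preg_replace('/#.*/', '', $line)); // strip comments
        if (stripos($line, 'User-agent:') === 0) {
            // Does this rule group apply to our user agent?
            $agent = trim(substr($line, 11));
            $applies = ($agent === '*' || stripos($userAgent, $agent) !== false);
        } elseif ($applies && stripos($line, 'Disallow:') === 0) {
            // An empty Disallow value means everything is allowed.
            $rule = trim(substr($line, 9));
            if ($rule !== '' && strpos($path, $rule) === 0) {
                return false;
            }
        }
    }
    return true;
}

// e.g. robots_allowed('http://example.com', '/private/page.html');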

SAY GOODBYE
#4 · 2019-01-02 22:15

Thank you @hobodave.

However, I found two weaknesses in your code. Your parsing of the original URL to get the "host" segment stops at the first single slash. This presumes that all relative links start in the root directory, which is only true sometimes.

original url   :  http://example.com/game/index.html
href in <a> tag:  highscore.html
author's intent:  http://example.com/game/highscore.html  <-200->
crawler result :  http://example.com/highscore.html       <-404->

Fix this by breaking at the last single slash, not the first.
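
In code, the fix looks something like this hypothetical sketch (resolve_relative is my naming, not hobodave's; it assumes the page URL contains a path after the host and ignores protocol-relative '//host/...' hrefs):

function resolve_relative($pageUrl, $href)
{
    if (parse_url($href, PHP_URL_SCHEME) !== null) {
        return $href; // already absolute
    }
    if ($href !== '' && $href[0] === '/') {
        // root-relative: keep only scheme and host from the page URL
        $parts = parse_url($pageUrl);
        return $parts['scheme'] . '://' . $parts['host'] . $href;
    }
    // directory-relative: cut the page URL at the LAST slash, not the first
    $base = substr($pageUrl, 0, strrpos($pageUrl, '/') + 1);
    return $base . $href;
}

// resolve_relative('http://example.com/game/index.html', 'highscore.html')
// => 'http://example.com/game/highscore.html'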

A second, unrelated bug is that $depth does not really track recursion depth; it tracks the breadth of the first level of recursion.

If I believed this page were in active use I might debug this second issue, but I suspect the text I am writing now will never be read by anyone, human or robot, since this issue is six years old and I do not even have enough reputation to notify +hobodave directly about these defects by commenting on his code. Thanks anyway, hobodave.

欢心
#5 · 2019-01-02 22:15

Why use PHP for this when you can use wget? For example:

wget -r -l 1 http://www.example.com

Here -r makes wget crawl recursively and -l 1 limits the recursion to one level of links. For how to parse the contents, see Best Methods to parse HTML and use the search function for examples; how to parse HTML has been answered multiple times before.

#6 · 2019-01-02 22:17

Here is my implementation, based on the example/answer above.

  1. It is class based
  2. Uses cURL
  3. Supports HTTP auth
  4. Skips URLs that do not belong to the base domain
  5. Returns the HTTP response code for each page
  6. Returns the response time for each page

CRAWL CLASS:

class crawler
{
    protected $_url;
    protected $_depth;
    protected $_host;
    protected $_useHttpAuth = false;
    protected $_user;
    protected $_pass;
    protected $_seen = array();
    protected $_filter = array();

    public function __construct($url, $depth = 5)
    {
        $this->_url = $url;
        $this->_depth = $depth;
        $parse = parse_url($url);
        $this->_host = $parse['host'];
    }

    protected function _processAnchors($content, $url, $depth)
    {
        $dom = new DOMDocument('1.0');
        @$dom->loadHTML($content);
        $anchors = $dom->getElementsByTagName('a');

        foreach ($anchors as $element) {
            $href = $element->getAttribute('href');
            if (0 !== strpos($href, 'http')) {
                $path = '/' . ltrim($href, '/');
                if (extension_loaded('http')) {
                    $href = http_build_url($url, array('path' => $path));
                } else {
                    $parts = parse_url($url);
                    $href = $parts['scheme'] . '://';
                    if (isset($parts['user']) && isset($parts['pass'])) {
                        $href .= $parts['user'] . ':' . $parts['pass'] . '@';
                    }
                    $href .= $parts['host'];
                    if (isset($parts['port'])) {
                        $href .= ':' . $parts['port'];
                    }
                    $href .= $path;
                }
            }
            // Crawl only links that belong to the start domain (isValid() enforces this)
            $this->crawl_page($href, $depth - 1);
        }
    }

    protected function _getContent($url)
    {
        $handle = curl_init($url);
        if ($this->_useHttpAuth) {
            curl_setopt($handle, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
            curl_setopt($handle, CURLOPT_USERPWD, $this->_user . ":" . $this->_pass);
        }
        // following 302 redirects creates problems with authentication,
        // so it is left disabled here
//        curl_setopt($handle, CURLOPT_FOLLOWLOCATION, TRUE);
        // return the content
        curl_setopt($handle, CURLOPT_RETURNTRANSFER, TRUE);

        /* Get the HTML or whatever is linked in $url. */
        $response = curl_exec($handle);
        // response total time
        $time = curl_getinfo($handle, CURLINFO_TOTAL_TIME);
        /* Get the HTTP status code (e.g. 404 for file not found). */
        $httpCode = curl_getinfo($handle, CURLINFO_HTTP_CODE);

        curl_close($handle);
        return array($response, $httpCode, $time);
    }

    protected function _printResult($url, $depth, $httpcode, $time)
    {
        ob_end_flush();
        $currentDepth = $this->_depth - $depth;
        $count = count($this->_seen);
        echo "N::$count,CODE::$httpcode,TIME::$time,DEPTH::$currentDepth URL::$url <br>";
        ob_start();
        flush();
    }

    protected function isValid($url, $depth)
    {
        if (strpos($url, $this->_host) === false
            || $depth === 0
            || isset($this->_seen[$url])
        ) {
            return false;
        }
        foreach ($this->_filter as $excludePath) {
            if (strpos($url, $excludePath) !== false) {
                return false;
            }
        }
        return true;
    }

    public function crawl_page($url, $depth)
    {
        if (!$this->isValid($url, $depth)) {
            return;
        }
        // add to the seen URL
        $this->_seen[$url] = true;
        // get Content and Return Code
        list($content, $httpcode, $time) = $this->_getContent($url);
        // print Result for current Page
        $this->_printResult($url, $depth, $httpcode, $time);
        // process subPages
        $this->_processAnchors($content, $url, $depth);
    }

    public function setHttpAuth($user, $pass)
    {
        $this->_useHttpAuth = true;
        $this->_user = $user;
        $this->_pass = $pass;
    }

    public function addFilterPath($path)
    {
        $this->_filter[] = $path;
    }

    public function run()
    {
        $this->crawl_page($this->_url, $this->_depth);
    }
}

USAGE:

// USAGE
$startURL = 'http://YOUR_URL/';
$depth = 6;
$username = 'YOURUSER';
$password = 'YOURPASS';
$crawler = new crawler($startURL, $depth);
$crawler->setHttpAuth($username, $password);
// Exclude paths containing the following substring from being crawled
$crawler->addFilterPath('customer/account/login/referer');
$crawler->run();
来,给爷笑一个
#7 · 2019-01-02 22:19

As mentioned, there are crawler frameworks out there ready for customizing, but if what you're doing is as simple as you describe, you could make it from scratch pretty easily.

Scraping the links: http://www.phpro.org/examples/Get-Links-With-DOM.html

Dumping results to a file: http://www.tizag.com/phpT/filewrite.php
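
Put together, a minimal sketch of both steps might look like this (the URL and the output filename are placeholders):

$html = file_get_contents('http://www.example.com/');

$dom = new DOMDocument();
@$dom->loadHTML($html); // @ silences warnings on malformed HTML

$hrefs = array();
foreach ($dom->getElementsByTagName('a') as $anchor) {
    $hrefs[] = $anchor->getAttribute('href');
}

// Append one href per line to a local file.
file_put_contents('links.txt', implode(PHP_EOL, $hrefs) . PHP_EOL, FILE_APPEND);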
