I am using the PHP Simple HTML DOM Parser to get the contents of a webpage. Even though I was confident my code was correct, the parser kept returning nothing.
Here is what I am using to check whether anything is actually being retrieved:
<?php
include_once('simple_html_dom.php');
error_reporting(E_ALL);
ini_set('display_errors', '1');
$first_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/startseite/wettbewerb_CSL.html"; // works
$html = file_get_html($first_url);
echo "<textarea>Output\n===========\n $html</textarea><br /><br />";
$second_url = "http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html"; // does not work?
$html = file_get_html($second_url);
echo "<textarea>Output\n===========\n $html</textarea><br />";
?>
No errors, and nothing in the second textarea. The second URL does not seem to be getting scraped by the tool. Why?
simple_html_dom.php contains a size check: file_get_html() gives up when the downloaded content is larger than the MAX_FILE_SIZE constant (600,000 bytes by default). The second page is over 672,000 bytes, so this size check fails. Increase that constant and you should be OK.
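One way to apply that fix is a sketch like the following. In recent versions of the library the MAX_FILE_SIZE define is guarded, so you can override it before the include; in older versions it is defined unconditionally, and you have to edit the constant inside simple_html_dom.php itself. The value below is an arbitrary example, chosen only to be comfortably above the ~672 KB page:

```php
<?php
// Override the parser's size cap BEFORE including the library.
// Works only if your copy of simple_html_dom.php guards the define
// (e.g. `defined('MAX_FILE_SIZE') || define(...)`); otherwise edit
// the constant in simple_html_dom.php directly.
define('MAX_FILE_SIZE', 1200000); // example value; must exceed the page size

include_once('simple_html_dom.php');

$html = file_get_html('http://www.transfermarkt.co.uk/en/chinese-super-league/torschuetzen/wettbewerb_CSL.html');
```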
I tested your code and it works fine for me. Check your PHP memory limit; that may be the problem.
Increase your PHP memory limit and try again.
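For example, the limit can be raised either per script or globally. This is just a sketch; the 256M value is an arbitrary assumption, not a recommendation:

```php
<?php
// Option 1: raise the memory limit for this script only (example value).
ini_set('memory_limit', '256M');

// Option 2: raise it for all scripts by editing php.ini:
// memory_limit = 256M
```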