I'm writing a script that checks the keyword density of a page based on a URL the user submits. I've been using strip_tags, but it doesn't completely filter the JavaScript and other code out of the actual word content on the site, presumably because strip_tags only removes the tags themselves and leaves the text inside script and style elements behind. Is there a better way to separate the code on a page from the actual word content?
if (isset($_POST['url'])) {
    $url = $_POST['url'];
    $str = strip_tags(file_get_contents($url));
    $words = str_word_count(strtolower($str), 1);
    $word_count = array_count_values($words);
    foreach ($word_count as $key => $val) {
        $density = ($val / count($words)) * 100;
        echo "$key - COUNT: $val, DENSITY: " . number_format($density, 2) . "%<br/>\n";
    }
}
I have written two functions for this:
/**
 * Removes all given tags from an HTML string
 *
 * @param string   $str    The HTML string
 * @param string[] $tagArr An array of tag names to be removed
 *
 * @return string The HTML string without the tags
 */
function removeTags($str, $tagArr)
{
    foreach ($tagArr as $tag) {
        $str = preg_replace('#<' . $tag . '(.*?)>(.*?)</' . $tag . '>#is', '', $str);
    }
    return $str;
}
/**
 * Cleans an HTML string down to its plain text content
 *
 * @param string $str Some HTML string
 *
 * @return string The cleaned string
 */
function filterHtml($str)
{
    // Remove script and style tags together with their content
    $str = removeTags($str, ['script', 'style']);
    // Remove all remaining tags, but keep their content
    $str = preg_replace('/<[^>]*>/', ' ', $str);
    // Remove line breaks and tabs
    $str = str_replace(["\n", "\t", "\r"], ' ', $str);
    // Collapse runs of double spaces into single spaces
    while (strpos($str, '  ') !== false) {
        $str = str_replace('  ', ' ', $str);
    }
    // Return trimmed
    return trim($str);
}
Working Example
$fileContent = file_get_contents('http://stackoverflow.com/questions/25537377/filtering-html-from-site-content-php');
$filteredContent = filterHtml($fileContent);
var_dump($filteredContent);
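If you want to drop this into the keyword-density script from the question, one option (just a sketch, assuming the two functions above are defined in the same file) is to swap strip_tags for filterHtml and leave the counting loop unchanged:

if (isset($_POST['url'])) {
    // filterHtml() drops script/style content as well as the tags themselves
    $str = filterHtml(file_get_contents($_POST['url']));
    $words = str_word_count(strtolower($str), 1);
    $word_count = array_count_values($words);
    foreach ($word_count as $key => $val) {
        $density = ($val / count($words)) * 100;
        echo "$key - COUNT: $val, DENSITY: " . number_format($density, 2) . "%<br/>\n";
    }
}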
What you need is to parse the HTML so that you have a DOM-like structure you can iterate over, giving you access to the content of the individual nodes.
You can use PHP Simple HTML DOM Parser.
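As a rough sketch of that approach using PHP's built-in DOM extension instead of the Simple HTML DOM library (the helper name extractTextViaDom is just for illustration), you can remove the script and style nodes and then read the remaining text content:

function extractTextViaDom($html)
{
    $doc = new DOMDocument();
    // Suppress warnings caused by malformed real-world HTML
    libxml_use_internal_errors(true);
    $doc->loadHTML($html);
    libxml_clear_errors();

    // Remove script and style nodes so their contents never reach the text
    foreach (['script', 'style'] as $tagName) {
        $nodes = $doc->getElementsByTagName($tagName);
        // Iterate backwards because the node list is live and shrinks as we remove
        for ($i = $nodes->length - 1; $i >= 0; $i--) {
            $node = $nodes->item($i);
            $node->parentNode->removeChild($node);
        }
    }

    // textContent is the concatenated text of all remaining nodes
    return trim(preg_replace('/\s+/', ' ', $doc->textContent));
}

$text = extractTextViaDom(file_get_contents($_POST['url']));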