I'm looking into making a web crawler/spider but I need someone to point me in the right direction to get started.
Basically, my spider is going to search for audio files and index them.
I'm just wondering if anyone has any ideas for how I should do it. I've heard that doing it in PHP would be extremely slow. I know VB.NET, so could that come in handy?
I was thinking about using Google's filetype search to get links to crawl. Would that be OK?
In VB.NET you will need to get the HTML first, so use the WebClient class, or the HttpWebRequest and HttpWebResponse classes. There is plenty of info on how to use these on the web.
Then you will need to parse the HTML. I recommend using regular expressions for this, though be aware that a dedicated HTML parser copes better with malformed markup.
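Since the only concrete code in this thread is Java, here is a rough sketch of the regex approach in Java. The pattern below is a naive one of my own and will miss plenty of real-world markup (unquoted attributes, whitespace oddities, etc.), which is exactly why an HTML parser is usually the safer choice:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {
    // Naive href pattern: matches href="..." or href='...',
    // skipping bare fragment links. Good enough for a quick crawler,
    // not for arbitrary real-world HTML.
    private static final Pattern HREF =
        Pattern.compile("href\\s*=\\s*[\"']([^\"'#]+)[\"']", Pattern.CASE_INSENSITIVE);

    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1)); // the captured URL inside the quotes
        }
        return links;
    }

    public static void main(String[] args) {
        String page = "<a href=\"song.mp3\">song</a> <A HREF='page.html'>next</A>";
        System.out.println(extractLinks(page)); // [song.mp3, page.html]
    }
}
```

You would then filter the returned links by extension (e.g. keep only `.mp3`) before indexing them.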
Your idea of using Google for a filetype search is a good one. I did a similar thing a few years ago to gather PDFs to test PDF indexing in SharePoint, which worked really well.
Here is a link to a tutorial on how to write a web crawler in Java: http://java.sun.com/developer/technicalArticles/ThirdParty/WebCrawler/ I'm sure if you Google it you can find ones for other languages.
The pseudocode looks something like this:

    method spider(URL startURL) {
        URLStore = new collection              // e.g. a queue or stack (an ArrayList works)
        push(startURL, URLStore)               // start with a known URL
        while URLStore is not empty {
            currURL = pop(URLStore)            // take a URL
            download the page at currURL
            for each link URLx in the page that has not already been followed:
                push(URLx, URLStore)           // queue it for crawling
        }
    }
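The loop above can be turned into runnable Java. The `links` function here is a stand-in I've injected for "download the page and extract its links", so the crawl logic can be tried without a network connection; a real crawler would implement it with `URL.openStream()` or similar:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

public class Spider {
    // links maps a URL to the links found on that page; injecting it keeps
    // the crawl loop testable offline. maxPages bounds the crawl.
    public static Set<String> crawl(String startUrl,
                                    Function<String, List<String>> links,
                                    int maxPages) {
        Deque<String> urlStore = new ArrayDeque<>(); // URLs still to visit
        Set<String> visited = new LinkedHashSet<>(); // URLs already followed
        urlStore.push(startUrl);                     // start with a known URL
        while (!urlStore.isEmpty() && visited.size() < maxPages) {
            String curr = urlStore.pop();            // take a URL
            if (!visited.add(curr)) continue;        // skip pages seen before
            for (String link : links.apply(curr)) {  // "download" + extract
                if (!visited.contains(link)) urlStore.push(link);
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        // Tiny in-memory "web" standing in for real pages.
        Map<String, List<String>> web =
            Map.of("a", List.of("b", "c"), "b", List.of("a"));
        Function<String, List<String>> fakeWeb =
            url -> web.getOrDefault(url, List.of());
        System.out.println(crawl("a", fakeWeb, 10)); // [a, c, b]
    }
}
```

The `visited` set is what keeps the crawler from looping forever on pages that link back to each other.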
To read some data from a web page in Java you can do:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    URL myURL = new URL("http://www.w3.org");
    BufferedReader in = new BufferedReader(new InputStreamReader(myURL.openStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null) { // you will get all content of the page
        System.out.println(inputLine);            // here you need to extract the hyperlinks
    }
    in.close();