I am thinking of developing a web search engine using Erlang, Mnesia & Yaws. Is it possible to build a powerful and fast web search engine using this software? What would it need to accomplish this, and what should I start with?
Erlang can make the most powerful web crawler today. Let me take you through my simple crawler.
Step 1. I create a simple parallelism module, which I call mapreduce.
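The module itself is not shown here, so below is a minimal sketch of such a parallel helper, assuming one spawned worker per item (the module and function names are my own):

```erlang
-module(mapreduce).
-export([pmap/2]).

%% Apply F to every element of List in parallel: spawn one worker per
%% element, tag each reply with a unique reference, then collect the
%% results in the original order.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```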
Step 2. The HTTP Client
One would normally use either the inets httpc module built into Erlang, or ibrowse. However, for memory management and speed (keeping the memory footprint as low as possible), a good Erlang programmer would choose curl. By applying os:cmd/1 to a curl command line, one gets the output directly into the calling Erlang function. Yet still, it is better to make curl throw its output into files, and to have another process in our application whose work is to read and parse those files.

So you can spawn many fetch processes; just remember to escape the URL as well as the output file path as you execute the command. The watcher process, on the other hand, monitors the directory of downloaded pages: it reads and parses each page, and may then delete it after parsing, save it in a different location, or, better still, archive it using the zip module.
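A minimal sketch of that curl approach, with made-up module and function names; it assumes curl is installed and on the PATH:

```erlang
-module(fetcher).
-export([fetch/2]).

%% Shell out to curl and let it write the page straight to OutFile.
%% Both the URL and the output path are quoted before being handed
%% to the shell, as advised above.
fetch(Url, OutFile) ->
    Cmd = io_lib:format("curl --silent --output '~s' '~s'", [OutFile, Url]),
    os:cmd(lists:flatten(Cmd)).
```

Fanning many of these fetches out is then just a matter of handing a URL list to the pmap helper from Step 1.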
Step 3. The HTML parser.

Better to use mochiweb's HTML parser together with XPath. This will help you parse and get all your favorite HTML tags, extract the contents, and then you are good to go. In my examples I focused only on the keywords, description and title in the markup. Testing the module in the shell gives awesome results!
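A sketch of such a parser; it assumes mochiweb's mochiweb_html module plus the third-party mochiweb_xpath add-on, whose exact API may differ between versions:

```erlang
-module(page_parser).
-export([analyze/1]).

%% Parse raw HTML and pull out the three fields discussed above:
%% the title, and the keywords and description meta tags.
analyze(HtmlBinary) ->
    Tree = mochiweb_html:parse(HtmlBinary),
    Title       = mochiweb_xpath:execute("/html/head/title/text()", Tree),
    Keywords    = mochiweb_xpath:execute("/html/head/meta[@name='keywords']/@content", Tree),
    Description = mochiweb_xpath:execute("/html/head/meta[@name='description']/@content", Tree),
    {Title, Keywords, Description}.
```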
You can now see that we can index pages against their keywords, together with a good schedule of page revisits. Another challenge was how to make a crawler (something that will move around the entire web, from domain to domain), but that one is easy: parse each HTML file for href tags. Make the HTML parser extract all href tags, and then a few regular expressions here and there will get you the links right under a given domain.
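A sketch of the href extraction, walking the mochiweb_html parse tree (names again my own):

```erlang
-module(links).
-export([hrefs/1]).

%% Parse the HTML and collect the value of every href attribute in
%% the document, depth first.
hrefs(HtmlBinary) ->
    collect(mochiweb_html:parse(HtmlBinary), []).

collect({_Tag, Attrs, Children}, Acc0) ->
    Acc = case lists:keyfind(<<"href">>, 1, Attrs) of
              {_, Url} -> [Url | Acc0];
              false    -> Acc0
          end,
    lists:foldl(fun collect/2, Acc, Children);
collect(_TextOrComment, Acc) ->
    Acc.
```

Running the crawler then amounts to something like the following hypothetical shell session, using the module names assumed in the sketches above:

```erlang
1> Seeds = ["http://example.com/", "http://example.org/"].
2> Jobs = lists:zip(Seeds, lists:seq(1, length(Seeds))).
3> mapreduce:pmap(fun({Url, N}) ->
       fetcher:fetch(Url, "page_" ++ integer_to_list(N) ++ ".html")
   end, Jobs).
```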
Storage is one of the most important concepts for a search engine. It is a big mistake to store search engine data in an RDBMS like MySQL, Oracle or MS SQL: such systems are complex, and the applications that interface with them employ heuristic algorithms. This brings us to key-value stores, of which my two favorites are Couchbase Server and Riak. These are great cloud file systems. Another important parameter is caching. Caching is attained using, say, Memcached, which both of the storage systems mentioned above support. Storage systems for search engines ought to be schemaless DBMSs that focus on availability rather than consistency.
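As an illustration, storing a crawled page in Riak with its Erlang client (riakc) looks roughly like this; the bucket and key names are made up:

```erlang
%% Connect to a local Riak node and store one page under its URL.
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Obj = riakc_obj:new(<<"pages">>, <<"http://example.com/">>,
                    <<"raw page body or extracted metadata">>),
ok = riakc_pb_socket:put(Pid, Obj).
```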
Read more on search engines here: http://en.wikipedia.org/wiki/Web_search_engine

As far as I know, Powerset's natural language processing search engine was developed using Erlang.
Did you look at CouchDB (which is written in Erlang as well) as a possible tool to help you solve a few problems on your way?
In the 'rdbms' contrib, there is an implementation of the Porter Stemming Algorithm. It was never integrated into 'rdbms', so it's basically just sitting out there. We have used it internally, and it worked quite well, at least for datasets that weren't huge (I haven't tested it on huge data volumes).
The relevant modules are:
Then there is, of course, the Disco Map-Reduce framework.
Whether or not you can make the fastest engine out there, I couldn't say. Is there a market for a faster search engine? I've never had problems with the speed of e.g. Google. But a search facility that increased my chances of finding good answers to my questions would interest me.
I would recommend CouchDB instead of Mnesia.
YAWS is pretty good. You should also consider MochiWeb.
You won't go wrong with Erlang.