I'm trying to figure out how to install and set up Storm/StormCrawler with Elasticsearch and Kibana as described here.
I've never installed Storm on my local machine. When I worked with Nutch I never had to install Hadoop locally, so I assumed it might be the same with Storm (maybe not?).
I'd like to start crawling with StormCrawler instead of Nutch now.
It seems that if I just download a Storm release and add its /bin to my PATH, the storm command can only talk to a remote cluster.
It also seems like I need to set up a development environment, according to this. That would let me develop different topologies over time and then talk to the remote cluster from my local machine when I'm ready to deploy them. Is that right?
If so, is all I need to do to add Storm as a dependency to my StormCrawler project when I build it with Maven?
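In other words, would something like this in my pom.xml be enough? (A sketch of what I mean; the version number is a placeholder I'd match to the cluster, and I'm guessing the scope should be `provided` so the jar I submit doesn't bundle Storm itself, since the cluster already supplies it at runtime.)

```xml
<!-- Storm as a 'provided' dependency: available at compile time,
     but supplied by the cluster at runtime, so not packaged in the jar -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-core</artifactId>
  <version>${storm.version}</version> <!-- placeholder: match the remote cluster's Storm version -->
  <scope>provided</scope>
</dependency>
```

And then I'd build with `mvn clean package` and submit the resulting jar to the remote cluster with the `storm jar` command?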