I am trying to run a basic crawl as per the NutchTutorial:
bin/nutch crawl urls -dir crawl -depth 3 -topN 5
So I have Nutch all installed and set up with Solr. I set my $JAVA_HOME in my .bashrc to /usr/lib/jvm/java-1.6.0-openjdk-amd64.
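For reference, the relevant line in my .bashrc is just the standard export:

export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-amd64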
I don't see any problems when I run bin/nutch from the Nutch home directory, but when I try to run the crawl as above I get the following error:
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /usr/share/nutch/logs/hadoop.log (Permission denied)
at java.io.FileOutputStream.openAppend(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:207)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at org.apache.log4j.FileAppender.setFile(FileAppender.java:290)
at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:164)
at org.apache.log4j.DailyRollingFileAppender.activateOptions(DailyRollingFileAppender.java:216)
at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:257)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:133)
at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:97)
at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:689)
at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:647)
at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:544)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:440)
at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:476)
at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:471)
at org.apache.log4j.LogManager.<clinit>(LogManager.java:125)
at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:270)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:281)
at org.apache.nutch.crawl.Crawl.<clinit>(Crawl.java:43)
log4j:ERROR Either File or DatePattern options are not set for appender [DRFA].
solrUrl is not set, indexing will be skipped...
crawl started in: crawl
rootUrlDir = urls
threads = 10
depth = 3
solrUrl=null
topN = 5
Injector: starting at 2013-06-28 16:24:53
Injector: crawlDb: crawl/crawldb
Injector: urlDir: urls
Injector: Converting injected urls to crawl db entries.
Injector: total number of urls rejected by filters: 0
Injector: total number of urls injected after normalization and filtering: 1
Injector: Merging injected urls into crawl db.
Exception in thread "main" java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.crawl.Injector.inject(Injector.java:296)
at org.apache.nutch.crawl.Crawl.run(Crawl.java:132)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.crawl.Crawl.main(Crawl.java:55)
I suspect it might have something to do with file permissions, as I have to run sudo on almost everything on this server. But if I run the same crawl command with sudo, I get:

Error: JAVA_HOME is not set.

So I feel like I've got a catch-22 situation here. Should I be able to run this command with sudo, or is there something else I need to do so that it works without sudo, or is something else entirely going on?
The key to solving this issue is to add the JAVA_HOME variable to your sudo environment. For example, type env and then sudo env, and you will see that JAVA_HOME is not set for sudo. To remedy this, you will need to add the path.
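A quick way to see the difference (the grep is only there to filter the output):

env | grep JAVA_HOME        # prints the path exported in .bashrc
sudo env | grep JAVA_HOME   # typically prints nothing, since sudo's env_reset strips it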
Use sudo visudo to edit your /etc/sudoers file. (Do not use a standard text editor; visudo is a special vi editor that validates the syntax before allowing you to save.) Add this line:

Defaults env_keep += "JAVA_HOME"

at the end of the Defaults env_keep section.
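After saving, sudo picks the variable up from your shell, which you can confirm with the same check as before:

sudo env | grep JAVA_HOME   # should now print the path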
It seems that, as a normal user, you don't have permission to write to /usr/share/nutch/logs/hadoop.log, which makes sense as a security feature. To get around this, create a simple bash script:
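Something along these lines, reusing the JAVA_HOME path and crawl command from the question (a sketch; adjust both to your install):

#!/bin/bash
# Set JAVA_HOME explicitly so it survives sudo's environment reset,
# then run the crawl. Run this from the Nutch home directory,
# since bin/nutch is a relative path.
export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-amd64
bin/nutch crawl urls -dir crawl -depth 3 -topN 5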
Save it as nutch.sh, then run it with sudo:
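sudo sh nutch.sh   # from the Nutch home directory, since the script uses relative paths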