Nutch job failing when sending data to Solr

Published 2019-07-18 08:42

Question:

I've been trying various things, to no avail. My Nutch/Solr configuration is based on this guide:

http://ubuntuforums.org/showthread.php?t=1532230

Now that I have Nutch and Solr up and running, I would like to use Solr to index the crawl data. Nutch successfully crawls the domain I specified, but fails when I run the command to send that data to Solr. Here's the command:

bin/nutch solrindex http://solr:8181/solr/ crawl/crawldb crawl/linkdb crawl/segments/*

Here's the output:

Indexer: starting at 2013-09-12 10:34:43
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication


Indexer: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/crawl_fetch
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/crawl_parse
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/parse_data
Input path does not exist: file:/usr/share/apache-nutch-1.7/crawl/linkdb/parse_text
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:40)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:123)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:185)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:195)
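
The input-path errors above suggest the indexer is treating crawl/linkdb as a segment: without a -linkdb flag, the trailing paths are all parsed as segment directories, so the indexer goes looking for segment subdirectories (crawl_fetch, parse_data, ...) inside the linkdb, which doesn't have them. A mock layout illustrates the difference (directory names here are illustrative, not real crawl data):

```shell
# Mock layout: a Nutch segment holds the parts the indexer reads,
# while the linkdb holds only a "current" database.
mkdir -p mockcrawl/segments/20130912103443/{crawl_fetch,crawl_parse,parse_data,parse_text}
mkdir -p mockcrawl/linkdb/current
ls mockcrawl/segments/20130912103443   # crawl_fetch  crawl_parse  parse_data  parse_text
ls mockcrawl/linkdb                    # current
```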

I've also tried another command after much Googling:

bin/nutch solrindex http://solr:8181/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*

With this output:

Indexer: starting at 2013-09-12 10:45:51
Indexer: deleting gone documents: false
Indexer: URL filtering: false
Indexer: URL normalizing: false
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : use authentication (default false)
solr.auth : username for authentication
solr.auth.password : password for authentication


Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:123)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:185)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:195)

Does anyone have any ideas of how to overcome these errors?

Answer 1:

I was getting the same error on a fresh Solr 5.2.1 and Nutch 1.10:

2015-07-30 20:56:23,015 WARN mapred.LocalJobRunner - job_local_0001 org.apache.solr.common.SolrException: Not Found

Not Found

request: http://127.0.0.1:8983/solr/update?wt=javabin&version=2

So I created a collection (or core; I am not a Solr expert):

bin/solr create -c demo

And changed the URL in the Nutch indexing command:

bin/nutch solrindex http://127.0.0.1:8983/solr/demo crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
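
An easy way to get the "Not Found" response above is to keep pointing at the bare /solr/ endpoint after creating the core. A minimal sketch of a sanity check (the URL and the check itself are illustrative, not part of Nutch):

```shell
# Illustrative check: solrindex needs the core/collection in the URL
# (e.g. .../solr/demo), not the bare .../solr/ endpoint.
SOLR_URL="http://127.0.0.1:8983/solr/demo"   # assumption: adjust to your setup
case "${SOLR_URL%/}" in
  */solr) echo "missing core name in: ${SOLR_URL}" ;;
  *)      echo "ok: ${SOLR_URL}" ;;
esac
```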

I know the question is rather old, but maybe this will help somebody...



Answer 2:

Did you check the Solr log? It usually reveals the reason for the error. I once had the same problem with Nutch, and Solr's log showed the message "unknown field 'host'". After I modified Solr's schema.xml, the problem vanished.
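
For reference, a field like the one that error complains about can be declared among the field definitions in schema.xml. This is only a sketch: the field name comes from the error message, and the type assumes the schema already defines a "string" type (Nutch also ships a ready-made conf/schema.xml you can copy into Solr instead):

```xml
<!-- Sketch: declare the field Solr rejected; place it with the other
     <field> declarations in schema.xml, then reload the core. -->
<field name="host" type="string" stored="false" indexed="true"/>
```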