Trouble running RecommenderJob on hadoop

Posted 2019-07-25 10:29

After adding links-simple-sorted.txt and users.txt to the input directory on HDFS, I am trying to run the following command:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar /opt/mahout/core/target/mahout-core-0.7-SNAPSHOT-job.jar org.apache.mahout.cf.taste.hadoop.item.RecommenderJob -Dmapred.input.dir=input/input.txt -Dmapred.output.dir=output --similarityClassname SIMILARITY_PEARSON_CORRELATION --usersFile input/users.txt --booleanData

Then I got the following error:

12/03/02 06:17:06 INFO common.AbstractJob: Command line arguments: {--booleanData=[false], --endPhase=[2147483647], --maxPrefsPerUser=[10], --maxPrefsPerUserInItemSimilarity=[1000], --maxSimilaritiesPerItem=[100], --minPrefsPerUser=[1], --numRecommendations=[10], --similarityClassname=[SIMILARITY_PEARSON_CORRELATION], --startPhase=[0], --tempDir=[temp], --usersFile=[input/users.txt]}
12/03/02 06:17:06 INFO common.AbstractJob: Command line arguments: {--booleanData=[false], --endPhase=[2147483647], --input=[input/input.txt], --maxPrefsPerUser=[1000], --minPrefsPerUser=[1], --output=[temp/preparePreferenceMatrix], --ratingShift=[0.0], --startPhase=[0], --tempDir=[temp]}
12/03/02 06:17:07 INFO input.FileInputFormat: Total input paths to process : 1
12/03/02 06:17:08 INFO mapred.JobClient: Running job: job_201203020113_0018
12/03/02 06:17:09 INFO mapred.JobClient:  map 0% reduce 0%
12/03/02 06:17:23 INFO mapred.JobClient: Task Id : attempt_201203020113_0018_m_000000_0, Status : FAILED
java.lang.ArrayIndexOutOfBoundsException: 1

    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:47)
    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:31)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

12/03/02 06:17:29 INFO mapred.JobClient: Task Id : attempt_201203020113_0018_m_000000_1, Status : FAILED
java.lang.ArrayIndexOutOfBoundsException: 1

    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:47)
    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:31)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

12/03/02 06:17:35 INFO mapred.JobClient: Task Id : attempt_201203020113_0018_m_000000_2, Status : FAILED
java.lang.ArrayIndexOutOfBoundsException: 1

    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:47)
    at org.apache.mahout.cf.taste.hadoop.item.ItemIDIndexMapper.map(ItemIDIndexMapper.java:31)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.Child.main(Child.java:170)

12/03/02 06:17:44 INFO mapred.JobClient: Job complete: job_201203020113_0018
12/03/02 06:17:44 INFO mapred.JobClient: Counters: 3
12/03/02 06:17:44 INFO mapred.JobClient:   Job Counters
12/03/02 06:17:44 INFO mapred.JobClient:     Launched map tasks=4
12/03/02 06:17:44 INFO mapred.JobClient:     Data-local map tasks=4
12/03/02 06:17:44 INFO mapred.JobClient:     Failed map tasks=1
Exception in thread "main" java.io.IOException: Cannot open filename /user/hduser/temp/preparePreferenceMatrix/numUsers.bin
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1497)
    at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1488)
    at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:376)
    at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:178)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
    at org.apache.mahout.common.HadoopUtil.readInt(HadoopUtil.java:267)
    at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.run(RecommenderJob.java:162)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.mahout.cf.taste.hadoop.item.RecommenderJob.main(RecommenderJob.java:293)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

What do I have to do to get past this error? (If possible, please include the command.)

Your help will be appreciated.

4 Answers
老娘就宠你 · answered 2019-07-25 10:51

I also had this issue; my CSV formatting was correct and I was puzzled for a while.

In the end, the problem was a blank line hiding at the end of the file that I hadn't noticed.

Hope that saves someone else's blood pressure a little.
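If you suspect the same thing, a GNU sed one-liner can drop empty lines before you re-upload the file (a sketch; the file name is illustrative):

```shell
# Create a sample file with a trailing blank line, then strip all empty lines.
printf '1,101\n2,102\n\n' > sample.txt
sed -i '/^$/d' sample.txt   # GNU sed; on BSD/macOS use: sed -i '' '/^$/d'
cat sample.txt
```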

我想做一个坏孩纸 · answered 2019-07-25 10:59

Your input is malformed. It needs to be tab or comma separated.
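For reference, RecommenderJob expects one preference per line, as userID,itemID[,preferenceValue] (the IDs and values below are made up for illustration):

```
1,101,5.0
1,102,3.0
2,101,2.5
```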

甜甜的少女心 · answered 2019-07-25 11:03

I had the same issue. I was trying to run the example from http://girlincomputerscience.blogspot.in/2010/11/apache-mahout.html

It turned out to be an input file format issue: some invisible characters had been copied in.

Open the input file in a text editor, remove all the invisible characters, and save it again. Then upload it:
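If you prefer to clean it from the command line, something like this removes carriage returns and blank lines (a sketch; file names are examples, and a UTF-8 BOM at the start of the file is another common culprit):

```shell
# Build a sample file with Windows line endings and a blank line,
# then strip \r characters and empty lines.
printf '1,101\r\n\r\n2,102\r\n' > dirty.txt
tr -d '\r' < dirty.txt | sed '/^[[:space:]]*$/d' > input.txt
cat input.txt
```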

bin/hadoop fs -put input.txt input/input.txt

The file should be TSV or CSV.

太酷不给撩 · answered 2019-07-25 11:09

I had the same problem; this is how I got it to work.

I first tried replacing ": " with ",":

sed -i 's/: /,/' links-simple-sorted.txt

Since that didn't work, I looked up the documentation. It seems the file has to be reformatted: each original line becomes one line per link, each containing the user ID followed by that link:

awk -F, -v OFS="," '{ user = $1; split($2, links, " "); for (link in links) { print user,links[link]; } }' links-simple-sorted.txt > input.txt

then I uploaded the new file:

bin/hadoop fs -put input.txt input/input.txt

Now the example runs with the bin/hadoop jar ... command from the question.
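As a sanity check, the whole transformation can be tried on a couple of toy lines (the IDs are invented; the real file is much larger):

```shell
# "user: link link ..." becomes one "user,link" pair per line.
printf '1: 2 3\n4: 5\n' > links-sample.txt
sed 's/: /,/' links-sample.txt \
  | awk -F, '{ n = split($2, links, " "); for (i = 1; i <= n; i++) print $1 "," links[i] }' \
  > input-sample.txt
cat input-sample.txt
```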
