I am trying to run a Hadoop Streaming Python job.
bin/hadoop jar contrib/streaming/hadoop-0.20.1-streaming.jar \
    -D stream.non.zero.exit.is.failure=true \
    -input /ixml \
    -output /oxml \
    -mapper scripts/mapper.py \
    -file scripts/mapper.py \
    -inputreader "StreamXmlRecordReader,begin=channel,end=/channel" \
    -jobconf mapred.reduce.tasks=0
I made sure mapper.py has all the required permissions. The job errors out with:
Caused by: java.io.IOException: Cannot run program "mapper.py":
error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at org.apache.hadoop.streaming.PipeMapRed.configure(PipeMapRed.java:214)
... 19 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
at java.lang.ProcessImpl.start(ProcessImpl.java:91)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
I also tried copying mapper.py to HDFS and passing the same hdfs://localhost/mapper.py link, but that does not work either. Any thoughts on how to fix this?
I just ran into the same error when my mapper returned a null or empty string, so I had to add a check for the value:
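A minimal sketch of that guard, assuming a tab-separated key/value mapper (the function name and record format are mine, not from the job above):

```python
import sys

def emit_if_valid(key, value, out=sys.stdout):
    # Skip null/empty values so the mapper never emits a blank record,
    # which is what triggered the same IOException for me.
    if value is None or not value.strip():
        return False
    out.write("%s\t%s\n" % (key, value))
    return True
```

The mapper then calls emit_if_valid for each record instead of writing to stdout directly.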
Your problem is most likely that the python executable does not exist on the slaves (where the TaskTracker is running). Java will give the same error message in that case.
Install Python everywhere it is used. In your file you can use a shebang, as you probably already do:
#!/usr/bin/python
Make sure that the path after the shebang is the same path where Python is installed on the TaskTrackers.
One other sneaky thing can cause this. If the line endings in your script are DOS-style, then your first line (the "shebang" line) may look like this to the naked eye:
#!/usr/bin/python
but its bytes look like this to the kernel when it tries to execute your script:
#!/usr/bin/python\r
It's looking for an executable called "/usr/bin/python\r", which it can't find, so it dies with "No such file or directory".
This bit me today, again, so I had to write it down somewhere on SO.
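One quick way to check for the stray \r is to look at the raw bytes of the first line; a small Python sketch (the path is just an example):

```python
def has_dos_line_endings(path):
    # Read the raw bytes of the first line; a trailing b"\r\n" means the
    # kernel will see the shebang interpreter as ".../python\r".
    with open(path, "rb") as f:
        first_line = f.readline()
    return first_line.endswith(b"\r\n")
```

If it returns True, re-save the script with Unix line endings (e.g. with dos2unix) before shipping it to the job.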
Looking at the example on the HadoopStreaming wiki page, it seems that you should change
-mapper scripts/mapper.py
to
-mapper mapper.py
since "shipped files go to the working directory". You might also need to specify the python interpreter directly:
-mapper "python mapper.py"
A "file not found" error sometimes does not mean "file not found"; it means "cannot execute this script".
Knowing this, I solved problems like yours. When you face (non-Java) issues with streaming, I suggest you follow this checklist:
- Check that the script runs on its own: python myScript.py
- Make it executable and start it as ./myScript.py, since this is the way streaming will call your script.
- Use -verbose to see what goes into the jar that will be deployed into the container; sometimes this helps.
- Check that the scripts passed with -file are not in folders: -mapper folder/script.py or -reducer folder/script.py are treated as script.py.
This checklist helped me a lot; I hope it can be useful for you too.
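As a concrete sanity check for the first two points, here is a minimal mapper skeleton (names are illustrative) that should behave the same whether you invoke it as python myScript.py or, once executable, as ./myScript.py:

```python
#!/usr/bin/env python
# Minimal streaming mapper: echoes each non-empty input line as
# key<TAB>1, which is enough to verify the script itself executes.
import sys

def map_line(line):
    """Turn one raw input line into a key<TAB>value record, or None to skip."""
    line = line.strip()
    if not line:
        return None
    return "%s\t%s" % (line, 1)

def run(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        record = map_line(line)
        if record is not None:
            stdout.write(record + "\n")

if __name__ == "__main__":
    run()
```

If this runs locally both ways but fails in the cluster, the problem is in packaging or permissions, not in the script.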
Here follows the classic log with its ambiguous error message.
It's true that it cannot run the program; it's the stated reason that is the lie.
Read this:
Caused by: java.io.IOException: error=2, No such file or directory
That part is a lie: the file does exist if -verbose shows it in the packaging list.
Does your mapper.py have execute permission on it? If not, you need to set it.
Hadoop forks and runs the script before it reads/writes stdin and stdout, so you need to give it execute permission.
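For example, from Python (equivalent to chmod +x on the shell; the path is the one from the question):

```python
import os
import stat

def make_executable(path):
    # Add execute bits for user, group and others, mirroring `chmod +x`,
    # so the forked process is allowed to run the script directly.
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

# make_executable("scripts/mapper.py")
```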