I am completely new to Hadoop and MapReduce and am trying to work my way through it. I am trying to develop a MapReduce application in Python that uses data from two .csv files. The mapper simply reads the two files and then prints the key-value pairs from them to sys.stdout.
The program runs fine on a single machine, but with Hadoop Streaming I get an error. I think I am making some mistake in how the mapper reads the files on Hadoop. Please help me out with the code, and tell me how to do file handling in Hadoop Streaming. The mapper.py code is below (you can understand the code from the comments):
#!/usr/bin/env python

import sys

def read_input(inVal):
    for line in inVal:
        # strip the trailing newline from each record
        yield line.strip()

def main(separator='\t'):
    # input comes from STDIN (standard input)
    labels = []
    data = []
    incoming = read_input(sys.stdin)
    for vals in incoming:
        # a label line is a single digit, while a data line holds
        # hundreds of comma-separated pixel values, so the line
        # length tells the two input files apart
        if len(vals) > 10:
            data.append(vals)
        else:
            labels.append(vals)
    # write the results to STDOUT (standard output);
    # what we output here will be the input for the
    # Reduce step, i.e. the input for reducer.py
    #
    # tab-delimited; print appends its own newline, so no
    # explicit "\n" in the format string (it emitted blank lines)
    for i in range(len(labels)):
        print "%s%s%s" % (labels[i], separator, data[i])

if __name__ == "__main__":
    main()
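On the cluster I launch the job with Hadoop Streaming along these lines; the streaming jar location and the input/output paths below are placeholders rather than my exact setup:

hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -files mapper.py \
    -mapper mapper.py \
    -input mnist_train_labels.csv \
    -input mnist_train_data.csv \
    -output mnist_output

This is where I get the error.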
There are 60,000 records fed to this mapper from the two .csv files, as follows (on a single machine, not a Hadoop cluster):
cat mnist_train_labels.csv mnist_train_data.csv | ./mapper.py
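To roughly approximate the shuffle phase locally, the mapper output can also be piped through sort before a reducer would consume it (just a local stand-in, not the real framework behavior):

cat mnist_train_labels.csv mnist_train_data.csv | ./mapper.py | sort -t$'\t' -k1,1 | head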