How to select the optimal key in MapReduce?

Posted 2019-07-10 12:43

I am working with stock transaction log files. Each line denotes a trade transaction with 20 tab-separated values. I am using Hadoop to process this file and do some benchmarking of trades. For each line I have to perform separate benchmark calculations, so there is no need for a reduce function in my MapReduce job. To perform the benchmark calculation for a line, I have to query a Sybase database to obtain some standard values corresponding to that line. The database is indexed on two values of each line [trade ID and stock ID]. Now my question is: should I use tradeId and stockId as the key in my MapReduce program, or should I choose some other value or combination of values as my key?

1 Answer

劫难 · 2019-07-10 13:16

So, for each line of input, you're going to query the database and then perform the benchmark calculations for that line separately. After the calculations finish, you output each line together with its benchmark value.

In this case, you can either skip the reducer entirely (a map-only job) or use an identity reducer.
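To skip the reducer, set the number of reduce tasks to zero in the driver; the mapper's output is then written straight to the output directory. Here is a minimal driver sketch under that assumption (BenchmarkDriver and BenchmarkMapper are hypothetical names, not from your code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BenchmarkDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "trade benchmark");
        job.setJarByClass(BenchmarkDriver.class);
        job.setMapperClass(BenchmarkMapper.class);   // the mapper sketched below
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(0);                    // map-only: mapper output goes straight to HDFS
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}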

So your map function will read in a line, fire a query to the Sybase database for the standard values, and then perform the benchmark calculations. Since you want to output each line with its benchmark value, you can have the map function emit the line as the key and the benchmark value as the value, i.e. <line, benchmarkValue>.

Your map function would look something like this (I'm assuming your benchmark value is an integer):

public void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {

    String line = value.toString();   // this will be your key in the final output

    // Parse the 20 tab-separated fields (tradeId and stockId among them)
    String[] fields = line.split("\t");

    /*
       Query the Sybase database with tradeId and stockId
       to obtain the standard values for this line.
    */

    /* Perform the benchmark calculations and obtain the benchmark value */
    int benchmarkValue = 0;   // placeholder for the computed benchmark

    context.write(new Text(line), new IntWritable(benchmarkValue));
}
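One practical note: opening a fresh database connection for every record will be slow. A common pattern is to open the Sybase connection once per mapper in setup() and close it in cleanup(). Below is a sketch of that pattern; the JDBC URL, credentials, and the standards table/column names are placeholders to replace with your actual Sybase (jConnect) details:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BenchmarkMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private Connection conn;
    private PreparedStatement stmt;

    @Override
    protected void setup(Context context) throws IOException {
        try {
            // Placeholder JDBC URL/credentials -- substitute your Sybase settings
            conn = DriverManager.getConnection(
                    "jdbc:sybase:Tds:dbhost:5000/trades", "user", "password");
            stmt = conn.prepareStatement(
                    "SELECT std_value FROM standards WHERE trade_id = ? AND stock_id = ?");
        } catch (SQLException e) {
            throw new IOException("Could not connect to Sybase", e);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        try {
            if (stmt != null) stmt.close();
            if (conn != null) conn.close();
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }

    // map() as shown above: query via stmt, compute the benchmark,
    // and emit <line, benchmarkValue>
}

This way each mapper issues one prepared-statement query per record instead of paying the connection-setup cost on every line.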