In Spark sc.newAPIHadoopRDD is reading 2.7 GB data

Posted 2019-06-04 00:24

I am using Spark 1.4 and trying to read 2.7 GB of data from HBase with sc.newAPIHadoopRDD, but only 5 tasks are created for this stage and it takes 2 to 3 minutes to process. Can anyone tell me how to increase the number of partitions so the data is read faster?
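For context, a minimal sketch of this kind of read; the table name and ZooKeeper quorum below are placeholder assumptions, not values from the question:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Hypothetical connection settings -- replace with your cluster's values.
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "zk-host")
conf.set(TableInputFormat.INPUT_TABLE, "my_table")

val rdd = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

// TableInputFormat yields one input split (and thus one partition) per region,
// so this should print the region count of the table -- 5 in the question.
println(rdd.partitions.length)
```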

1 Answer
贼婆χ · answered 2019-06-04 00:56

org.apache.hadoop.hbase.mapreduce.TableInputFormat creates one partition per region. Your table appears to be split into 5 regions, which is why you get 5 tasks. Pre-splitting the table into more regions will increase the number of partitions (see the HBase documentation on region splitting for more information).
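For example, a table can be pre-split at creation time in the hbase shell by supplying explicit split keys; the table name, column family, and split points below are hypothetical and must match your actual row-key distribution:

```shell
# hbase shell: create a table pre-split into 10 regions.
# Split keys should fall on boundaries of your real row keys.
create 'my_table', 'cf', SPLITS => ['1','2','3','4','5','6','7','8','9']
```

If re-creating the table is not an option, calling `rdd.repartition(n)` after the read will increase parallelism for downstream stages, but note that it adds a shuffle and does not speed up the HBase scan itself, which is still bounded by the region count.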
