I am trying to insert into a Hive table with dynamic partitions. The same query has been running fine for the last few days, but it is now failing with the error below.
Diagnostic Messages for this Task:
java.lang.RuntimeException:
org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error:
Unable to deserialize reduce input key from
x1x128x0x0x46x234x240x192x148x1x68x69x86x50x0x1x128x0x104x118x1x128x0x0x46x234x240x192x148x1x128x0x0x25x1x128x0x0x46x1x128x0x0x72x1x127x255x255x255x0x0x0x0x1x71x66x80x0x255
with properties
{columns=reducesinkkey0,reducesinkkey1,reducesinkkey2,reducesinkkey3,reducesinkkey4,reducesinkkey5,reducesinkkey6,reducesinkkey7,reducesinkkey8,reducesinkkey9,reducesinkkey10,reducesinkkey11,reducesinkkey12,
serialization.lib=org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe,
serialization.sort.order=+++++++++++++,
columns.types=bigint,string,int,bigint,int,int,int,string,int,string,string,string,string}
at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:283)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 3.33 sec   HDFS Read: 889  HDFS Write: 314  SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 1.42 sec   HDFS Read: 675  HDFS Write: 0    FAIL
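For reference, the failing query has roughly this shape (my_src, my_dest, and the dt partition column are placeholder names, not my actual schema):

-- enable dynamic partitioning for the session
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- dynamic partition column (dt) comes last in the SELECT list
INSERT OVERWRITE TABLE my_dest PARTITION (dt)
SELECT col_a, col_b, dt
FROM my_src;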
When I use the setting below, the query runs fine:
set hive.optimize.sort.dynamic.partition=false;
When I set this value back to true, I get the same error.
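To rule out a stale session, I also confirm which value is actually in effect before running the insert (in the Hive CLI, issuing set with the property name alone just prints the current value):

set hive.optimize.sort.dynamic.partition;
-- prints: hive.optimize.sort.dynamic.partition=false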
The source table is stored as SequenceFile and the destination table is stored as RCFile. Can anyone explain what difference this setting makes internally?
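For context, the two tables were created roughly like this (names and columns are again placeholders; only the storage formats and the partitioning match my real setup):

-- source: non-partitioned, SequenceFile storage
CREATE TABLE my_src (
  col_a BIGINT,
  col_b STRING,
  dt    STRING
)
STORED AS SEQUENCEFILE;

-- destination: dynamically partitioned on dt, RCFile storage
CREATE TABLE my_dest (
  col_a BIGINT,
  col_b STRING
)
PARTITIONED BY (dt STRING)
STORED AS RCFILE;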