I am running Hadoop 2.4.0, Oozie 4.0.0, and Hive 0.13.0 from the Hortonworks distribution.
I have multiple Oozie coordinator jobs that can potentially launch workflows all around the same time. Each coordinator job watches a different directory, and when a _SUCCESS file shows up in that directory, the corresponding workflow is launched.
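For reference, each coordinator declares its input directory as a dataset with a _SUCCESS done-flag, along these lines (a minimal sketch; the names, paths, and frequency are placeholders, not my exact config):

<coordinator-app name="feed1-coord" frequency="${coord:minutes(15)}"
                 start="${start}" end="${end}" timezone="UTC"
                 xmlns="uri:oozie:coordinator:0.2">
  <datasets>
    <dataset name="input" frequency="${coord:minutes(15)}"
             initial-instance="${start}" timezone="UTC">
      <uri-template>hdfs:///data/feed1/${YEAR}${MONTH}${DAY}</uri-template>
      <!-- The action is only materialized once _SUCCESS appears in the directory -->
      <done-flag>_SUCCESS</done-flag>
    </dataset>
  </datasets>
  <input-events>
    <data-in name="in" dataset="input">
      <instance>${coord:current(0)}</instance>
    </data-in>
  </input-events>
  <action>
    <workflow>
      <app-path>hdfs:///apps/feed1-workflow</app-path>
    </workflow>
  </action>
</coordinator-app>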
Each workflow runs a Hive action that reads from an external directory and copies the data into a partitioned Hive table:
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
DROP TABLE IF EXISTS ${INPUT_TABLE};
CREATE EXTERNAL TABLE IF NOT EXISTS ${INPUT_TABLE} (
id bigint,
data string,
creationdate timestamp,
datelastupdated timestamp)
LOCATION '${INPUT_LOCATION}';
-- Read from the external table and insert into a partitioned Hive table.
-- With dynamic partitioning, the partition column (data) must come last
-- in the SELECT list, in the same order as the PARTITION() clause.
FROM ${INPUT_TABLE} ent
INSERT OVERWRITE TABLE mytable PARTITION(data)
SELECT ent.id, ent.creationdate, ent.datelastupdated, ent.data;
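For context, the workflow invokes that script through a Hive action roughly like this (again a sketch; the action name, script name, and parameter values are placeholders):

<action name="load-partitions">
  <hive xmlns="uri:oozie:hive-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <script>load.hql</script>
    <!-- Substituted into the ${INPUT_TABLE} / ${INPUT_LOCATION} variables in the script above -->
    <param>INPUT_TABLE=${inputTable}</param>
    <param>INPUT_LOCATION=${inputLocation}</param>
  </hive>
  <ok to="end"/>
  <error to="fail"/>
</action>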
When I run only one coordinator and it launches one workflow, the workflow and its Hive action complete successfully without any problems.
When multiple workflows are launched around the same time, though, the Hive action stays in the RUNNING state for a long time.
If I look at the job syslogs, I see this:
2015-02-18 17:18:26,048 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1423085109915_0223_m_000000 Task Transitioned from SCHEDULED to RUNNING
2015-02-18 17:18:26,586 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1423085109915_0223: ask=3 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:32768, vCores:-3> knownNMs=1
2015-02-18 17:18:27,677 INFO [Socket Reader #1 for port 38704] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1423085109915_0223 (auth:SIMPLE)
2015-02-18 17:18:27,696 INFO [IPC Server handler 0 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1423085109915_0223_m_000002 asked for a task
2015-02-18 17:18:27,697 INFO [IPC Server handler 0 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1423085109915_0223_m_000002 given task: attempt_1423085109915_0223_m_000000_0
2015-02-18 17:18:34,951 INFO [IPC Server handler 2 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:19:05,060 INFO [IPC Server handler 11 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:19:35,161 INFO [IPC Server handler 28 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:20:05,262 INFO [IPC Server handler 2 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:20:35,358 INFO [IPC Server handler 11 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:21:02,452 INFO [IPC Server handler 23 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:21:32,545 INFO [IPC Server handler 1 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
2015-02-18 17:22:02,668 INFO [IPC Server handler 12 on 38704] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1423085109915_0223_m_000000_0 is : 1.0
It just keeps printing the "Progress of TaskAttempt" line over and over; the task attempt reports progress 1.0 (100%) but never completes.
Our yarn-site.xml is configured to use the CapacityScheduler:
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
Should I be using a different scheduler instead?
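Or, staying on the CapacityScheduler, would it help to split the Oozie launcher jobs into their own queue so they cannot occupy every container? Something along these lines in capacity-scheduler.xml (a sketch; the queue names and capacities are hypothetical, not tested):

<!-- Split the cluster into a small queue for Oozie launcher jobs
     and a larger queue for the actual Hive MapReduce jobs. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>launchers,default</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.launchers.capacity</name>
  <value>20</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>80</value>
</property>

This would be combined with oozie.launcher.mapred.job.queue.name=launchers in the workflow configuration, so only the launcher jobs land in that queue while the child Hive jobs stay in default.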
At this point I am not sure whether the issue is in Oozie or Hive.