Reducers failing

Posted 2019-09-07 03:50

We are using a 3-machine cluster, and the mapreduce.tasktracker.reduce.tasks.maximum property is set to 9. When I set the number of reducers to 9 or fewer, the job succeeds, but when I set it to more than 9, it fails with the exception "Task attempt_201701270751_0001_r_000000_0 failed to ping TT for 60 seconds. Killing!". Can anyone guide me on what the problem might be?
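For context, here is a minimal sketch (not from the original post) of an old-API Hadoop 0.20/MR1 driver showing where the reducer count is typically requested; the class name, job name, and use of the default identity mapper/reducer are assumptions for illustration only.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class ReducerCountDriver {
    public static void main(String[] args) throws Exception {
        // Hypothetical driver skeleton; relies on the default identity mapper/reducer.
        JobConf conf = new JobConf(ReducerCountDriver.class);
        conf.setJobName("reducer-count-test");

        // Requesting more reducers than the per-TaskTracker slot limit
        // (mapreduce.tasktracker.reduce.tasks.maximum = 9 in the question)
        // is the scenario where the failing attempts were observed.
        conf.setNumReduceTasks(12);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
```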

1 Answer
手持菜刀,她持情操
#2 · 2019-09-07 04:32

There seems to be a bug in Hadoop 0.20.

https://issues.apache.org/jira/browse/MAPREDUCE-1905 (for reference).

Can you please try increasing the task timeout by setting mapreduce.task.timeout to a higher value? (Setting it to 0 disables the timeout.)
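As a minimal sketch (an assumption, not part of the original answer), the timeout can be raised programmatically in the job configuration; note that on 0.20-era clusters the equivalent key may be the older name mapred.task.timeout.

```java
import org.apache.hadoop.mapred.JobConf;

public class TimeoutExample {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // 30 minutes in milliseconds; 0 would disable the timeout entirely.
        conf.setLong("mapreduce.task.timeout", 30 * 60 * 1000L);

        // The same value could also be passed on the command line with
        // -D mapreduce.task.timeout=1800000, provided the driver uses
        // ToolRunner/GenericOptionsParser to pick up generic options.
        System.out.println("task timeout = " + conf.get("mapreduce.task.timeout"));
    }
}
```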
