We are using a 3-node cluster and the mapreduce.tasktracker.reduce.tasks.maximum property is set to 9. When I set the number of reducers to 9 or fewer, the job succeeds, but if I set it to more than 9 it fails with the exception "Task attempt_201701270751_0001_r_000000_0 failed to ping TT for 60 seconds. Killing!". Can anyone tell me what the problem might be?
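For illustration, a minimal driver along these lines reproduces the setup (the class name, input/output paths, and the reducer count of 10 are placeholders, not my exact code):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ReducerCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "reducer-count-test");
        job.setJarByClass(ReducerCountDriver.class);

        // Succeeds with 9 or fewer reducers; fails once the count exceeds
        // the 9 configured by mapreduce.tasktracker.reduce.tasks.maximum.
        job.setNumReduceTasks(10);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```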
There seems to be a bug in Hadoop 0.20; see https://issues.apache.org/jira/browse/MAPREDUCE-1905 for reference.
Can you please try increasing the task timeout (set mapreduce.task.timeout to a higher value; 0 disables the timeout)? A sketch of how to do that is shown below.
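A minimal sketch of raising the timeout in the job driver, assuming Hadoop 0.20/1.x where the MRv1 key is "mapred.task.timeout" (later releases use "mapreduce.task.timeout"); the value is in milliseconds and the 30-minute figure is just an example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TimeoutConfigExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Raise the task timeout from the 10-minute default (600000 ms) to 30 minutes.
        conf.setLong("mapred.task.timeout", 30 * 60 * 1000L);
        // On Hadoop 2.x and later the equivalent key is:
        // conf.setLong("mapreduce.task.timeout", 30 * 60 * 1000L);

        Job job = new Job(conf, "timeout-test");
        // ... set mapper/reducer classes and input/output paths as usual ...
    }
}
```

The same property can also be set cluster-wide in mapred-site.xml, or passed at submit time (if your driver uses GenericOptionsParser/ToolRunner) with -D mapred.task.timeout=1800000.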