Hadoop: java.io.IOException: Cannot allocate memory

Posted 2019-08-26 01:24

Question:

I am getting the following error on Greenplum Hadoop:

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Cannot run program "ln": java.io.IOException: error=12, Cannot allocate memory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:488)
    at java.lang.Runtime.exec(Runtime.java:610)
    at java.lang.Runtime.exec(Runtime.java:448)
    at java.lang.Runtime.exec(Runtime.java:386)
    at org.apache.hadoop.fs.FileUtil.symLink(FileUtil.java:567)
    at org.apache.hadoop.mapred.TaskLog.createTaskAttemptLogDir(TaskLog.java:109)
    at org.apache.hadoop.mapred.DefaultTaskController.createLogDir(DefaultTaskController.java:71)
    at org.apache.hadoop.mapred.TaskRunner.prepareLogFiles(TaskRunner.java:316)
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:228)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
    at java.lang.ProcessImpl.start(ProcessImpl.java:81)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:470)
    ... 8 more

The server has 7 GB of RAM and 1 GB of swap.

The heap size is 1024 MB and mapred.child.java.opts is set to 512 MB.

Any ideas?

Answer 1:

I reduced the TaskTracker memory to 256 MB and limited the tasks to one per node; anything higher caused child errors and made the MapReduce job take longer to run.
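For reference, a minimal mapred-site.xml sketch of that arrangement, assuming Hadoop 1.x property names and reading "one per node" as one map slot and one reduce slot per TaskTracker; the TaskTracker daemon's own 256 MB heap would be set separately, e.g. with HADOOP_HEAPSIZE=256 in hadoop-env.sh:

    <!-- mapred-site.xml: allow only one task of each kind per node -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>1</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>1</value>
    </property>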



Answer 2:

Whatever memory arrangement you come up with, Hadoop is likely to throw this anyway. The problem is that for simple file-system tasks, like creating symbolic links or checking available disk space, Hadoop forks a process from the TaskTracker, and the forked child initially requires as much memory as its parent has allocated, even though the command it is about to run needs almost none.
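To make that concrete, here is a minimal, runnable Java sketch of the call that fails in the stack trace above (FileUtil.symLink exec-ing the external ln command); the /tmp paths are made up for illustration:

    import java.io.IOException;

    public class SymlinkDemo {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Rough equivalent of org.apache.hadoop.fs.FileUtil.symLink:
            // Runtime.exec() must fork the JVM before exec-ing "ln", and the
            // fork momentarily needs backing for the parent's entire virtual
            // address space. With little free RAM plus swap and no overcommit,
            // the fork fails with errno 12 (ENOMEM), even though "ln" itself
            // needs almost no memory.
            Process p = Runtime.getRuntime()
                    .exec(new String[] {"ln", "-s", "/tmp/target", "/tmp/link"});
            System.exit(p.waitFor());
        }
    }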

Typical ways to prevent this problem are to leave as much physical memory unallocated as is allocated to the TaskTracker, to add some swap to the host for these kinds of short-lived forks, or to allow memory overcommit.
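As a sketch of the last two options, assuming a Linux host (the swap-file path and its 2 GB size are illustrative, not from the post):

    # Let the kernel overcommit virtual memory, so fork() from a large JVM
    # no longer fails with ENOMEM (errno 12); the child shares pages
    # copy-on-write and execs "ln" before touching them.
    sysctl -w vm.overcommit_memory=1
    echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf   # persist across reboots

    # Or add swap so the fork can be backed:
    dd if=/dev/zero of=/swapfile bs=1M count=2048   # 2 GB swap file
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile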