kafka 8 and memory - There is insufficient memory

Posted 2019-03-10 22:49

I am using a DigitalOcean instance with 512 MB of RAM, and I get the error below with Kafka. I am not a proficient Java dev. How do I adjust Kafka to run in this small amount of RAM? This is a dev server, and I don't want to pay more per hour for a bigger machine.

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid6500.log
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000bad30000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)

2 Answers
做自己的国王
#2 · 2019-03-10 23:13

You can adjust the JVM heap size by editing kafka-server-start.sh, zookeeper-server-start.sh, and so on:

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

The -Xms parameter specifies the minimum heap size. To get your server to at least start up, try changing it to use less memory. Given that you only have 512M, you should change the maximum heap size (-Xmx) too:

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

I'm not sure what the minimal memory requirements of Kafka in its default config are - maybe you also need to adjust the message size in Kafka to get it to run.
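
If a smaller heap alone doesn't do it, a few broker settings in config/server.properties also influence how much memory the broker asks for. The property names below are my best recollection of the standard broker config, and the values are only illustrative - verify both against your Kafka version's documentation:

# cap the largest message the broker will accept (default is roughly 1 MB)
message.max.bytes=524288
# shrink the socket buffers the broker requests from the OS
socket.send.buffer.bytes=65536
socket.receive.buffer.bytes=65536
# bound the size of a single request the broker will buffer
socket.request.max.bytes=10485760

Add or edit these lines in config/server.properties and restart the broker; whether they are enough on a 512 MB box is something you would have to test.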

走好不送
#3 · 2019-03-10 23:13

Area: HotSpot / gc

Synopsis

Crashes due to failure to allocate large pages.

On Linux, failures when allocating large pages can lead to crashes. When running JDK 7u51 or later versions, the issue can be recognized in two ways:

1. Before the crash happens, one or more lines similar to the following example will have been printed to the log:

       os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
       error='Cannot allocate memory' (errno=12); Cannot allocate large pages,
       falling back to regular pages

2. If a file named hs_err is generated, it will contain a line similar to the following example:

       Large page allocation failures have occurred 3 times

The problem can be avoided by running with large page support turned off, for example, by passing the "-XX:-UseLargePages" option to the java binary.
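
With Kafka you don't have to edit the java invocation by hand: the launcher script (bin/kafka-run-class.sh in the versions I have seen) appends the KAFKA_OPTS environment variable to the java command line, so the flag can probably be passed like this - check your version's script to confirm it honors the variable:

# Sketch, assuming bin/kafka-run-class.sh passes $KAFKA_OPTS through to java.
export KAFKA_OPTS="-XX:-UseLargePages"
bin/kafka-server-start.sh config/server.properties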