GemFire Persistent Overflow

Posted 2019-08-17 06:02

Question:

I'm using Gemfire v7.0.1.3 on Linux. Below is my cache xml.

<?xml version.....>
<!DOCTYPE....>
<cache is-server="true">
   <disk-store name="myStore" auto-compact="false" max-oplog-size="1000" queue-size="10000" time-interval="150">
      <disk-dirs>
         <disk-dir>.....</disk-dir>
      </disk-dirs>
   </disk-store>
   <region name="myRegion" refid="PARTITION_PERSISTENT_OVERFLOW">
      <region-attributes disk-store-name="myStore" disk-synchronous="true">
         <eviction-attributes>
            <lru-entry-count maximum="500" action="overflow-to-disk" />
         </eviction-attributes>
      </region-attributes>
   </region>
</cache>
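
For reference, the same disk store and region can also be configured programmatically. Below is a minimal sketch of the equivalent Java setup, assuming the GemFire 7 cache API; the disk directory path is a placeholder and the server/port setup from the XML is omitted:

import java.io.File;
import com.gemstone.gemfire.cache.Cache;
import com.gemstone.gemfire.cache.CacheFactory;
import com.gemstone.gemfire.cache.DiskStoreFactory;
import com.gemstone.gemfire.cache.EvictionAction;
import com.gemstone.gemfire.cache.EvictionAttributes;
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.RegionShortcut;

public class CacheSetup {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Disk store mirroring the <disk-store> element above.
        DiskStoreFactory dsf = cache.createDiskStoreFactory();
        dsf.setAutoCompact(false);
        dsf.setMaxOplogSize(1000);      // MB
        dsf.setQueueSize(10000);
        dsf.setTimeInterval(150);       // ms
        dsf.setDiskDirs(new File[] { new File("/path/to/disk/dir") });  // placeholder path
        dsf.create("myStore");

        // Region mirroring the <region> element above.
        Region<Object, Object> region = cache
            .<Object, Object>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW)
            .setDiskStoreName("myStore")
            .setDiskSynchronous(true)
            .setEvictionAttributes(
                EvictionAttributes.createLRUEntryAttributes(500, EvictionAction.OVERFLOW_TO_DISK))
            .create("myRegion");
    }
}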

Now I start the cache server with 8 GB of heap. When I use a String as the cache key and a custom object (each object holds 4 double arrays, each of length 10,000) as the value, I can store 500 million objects in the cache without any issue. I can see the disk store directory filling with .crf, .krf, and .drf files, and if I restart the cache the entries are restored, so all the good stuff works.

But if I use the custom object as both key and value, I start getting a low memory exception after creating roughly 25,000 entries in the region. Is this expected behavior? The GemFire documentation says that when persistence and overflow are used together, all the keys and the least recently used values are overflowed to disk, while the most active entry values are kept in memory. So I was expecting to be able to store any number of objects in the region as long as there is space available in my disk store, yet I'm getting a low memory exception. Please help me understand.
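For reference, the custom object described above would look roughly like this; the class and field names are hypothetical, and only the shape (4 double arrays of length 10,000) comes from the question:

// Hypothetical key/value class matching the description in the question:
// 4 double arrays, each of length 10,000 (~320 KB of raw array data per instance).
public class MyData implements java.io.Serializable {
    private final double[] a = new double[10000];
    private final double[] b = new double[10000];
    private final double[] c = new double[10000];
    private final double[] d = new double[10000];
    // getters/setters omitted
}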

Thanks

Answer 1:

Keys are never overflowed to disk, so your heap must be large enough to hold all of the keys. For a persistent region the keys are also written to disk, but only for recovery purposes. So this behavior is expected if your object keys are much larger than your String keys.
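
A quick back-of-the-envelope check shows why roughly 25,000 entries can exhaust an 8 GB heap when the ~320 KB custom object is used as the key. This sketch only counts the raw double-array data from the question and ignores object headers, references, and other overhead:

// Rough heap estimate for keeping every custom-object key in memory.
public class KeyHeapEstimate {
    public static void main(String[] args) {
        long bytesPerKey = 4L * 10_000 * 8;            // 4 double arrays x 10,000 doubles x 8 bytes ~= 320 KB
        long entries = 25_000;                         // roughly where the low memory exception appears
        double totalGb = bytesPerKey * entries / 1e9;  // ~8 GB, i.e. the entire allocated heap
        System.out.printf("Keys alone need about %.1f GB of heap%n", totalGb);
    }
}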



Tags: gemfire