High memory usage when using Hibernate

Posted 2020-02-23 04:18

I develop a server-side Java application that runs on a Linux server. I use Hibernate to open a session to the database, query it with native SQL, and always close the session in a try/catch/finally block.
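
For reference, this is roughly the pattern I use (a minimal sketch; the DAO class, table and SQL are placeholders):

import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class UserDao {
    private final SessionFactory sessionFactory;

    public UserDao(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public List<?> findUsers() {
        Session session = null;
        try {
            session = sessionFactory.openSession();
            // native SQL query, as described above
            return session.createSQLQuery("select id, name from users").list();
        } finally {
            if (session != null) {
                session.close(); // always release the session and its connection
            }
        }
    }
}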

My server queries the database through Hibernate at a very high frequency.

I set MaxHeapSize to 3000M, but the process usually uses about 2.7 GB of RAM. The usage can go down, but it decreases much more slowly than it increases. Sometimes it grows to 3.6 GB, more than the MaxHeapSize I defined at startup.

When memory usage was at 3.6 GB, I took a dump with jmap and got a heap dump of only 1.3 GB.

I analysed it with Eclipse MAT; here is the dominator tree from MAT (Dominator tree screenshot). I think Hibernate is the problem: I have a huge number of org.apache.commons.collections.map.AbstractReferenceMap$ReferenceEntry entries like this. Maybe the garbage collector cannot dispose of them, or can only do so slowly.

How can I fix it?

3 Answers
Lonely孤独者°
#2 · 2020-02-23 04:36

Thank you Vlad Mihalcea for your link to the Hibernate issue; this is a bug in Hibernate that was fixed in version 3.6. I upgraded from Hibernate 3.3.2 to 3.6.10, kept the default values of "hibernate.query.plan_cache_max_soft_references" (2048) and "hibernate.query.plan_cache_max_strong_references" (128), and my problem is gone. No more high memory usage.
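
For completeness, this is how those two properties can be set explicitly when building the SessionFactory on 3.6+ (a minimal sketch; the values shown are just the defaults mentioned above):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {
    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // limit the query plan cache explicitly
        cfg.setProperty("hibernate.query.plan_cache_max_soft_references", "2048");
        cfg.setProperty("hibernate.query.plan_cache_max_strong_references", "128");
        return cfg.buildSessionFactory();
    }
}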

smile是对你的礼貌
#3 · 2020-02-23 04:41

Note that even though the number of objects in the queryPlanCache can be configured and limited, it is probably not normal to have that many in the first place.

In our case we were writing HQL queries similar to this:

hql = String.format("from Entity where msisdn='%s'", msisdn);

This resulted in N different queries going to the queryPlanCache. When we changed this query to:

hql = "from Blacklist where msisnd = :msisdn";
...
query.setParameter("msisdn", msisdn);

the size of the queryPlanCache was dramatically reduced, from about 100 MB to almost nothing. This second query is translated into a single PreparedStatement, resulting in just one object inside the cache.
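
Put together, the fixed lookup looks roughly like this (a sketch; the entity name and session handling are illustrative):

import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;

public class EntityDao {
    // One HQL string with a named parameter -> one cached query plan,
    // no matter how many different msisdn values are passed in.
    public List<?> findByMsisdn(Session session, String msisdn) {
        Query query = session.createQuery("from Entity where msisdn = :msisdn");
        query.setParameter("msisdn", msisdn);
        return query.list();
    }
}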

放荡不羁爱自由
#4 · 2020-02-23 04:45

You have 250k entries in your IN list. Even a native query will bring the database to its knees. Oracle limits IN lists to 1000 entries for performance reasons, so you should do the same.

Giving it more RAM is not going to solve the problem; you need to limit your selects/updates to batches of at most 1000 entries, using pagination.
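
A minimal sketch of that batching, assuming you start from a plain list of ids (the entity and property names are illustrative):

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Session;

public class BatchLoader {
    private static final int BATCH_SIZE = 1000;

    // Loads the entities in IN-list chunks of at most 1000 ids
    // instead of one huge IN clause.
    public List<Object> loadInBatches(Session session, List<Long> ids) {
        List<Object> result = new ArrayList<Object>();
        for (int i = 0; i < ids.size(); i += BATCH_SIZE) {
            List<Long> chunk = ids.subList(i, Math.min(i + BATCH_SIZE, ids.size()));
            result.addAll(session
                    .createQuery("from Entity e where e.id in (:ids)")
                    .setParameterList("ids", chunk)
                    .list());
        }
        return result;
    }
}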

Streaming is an option as well, but for such a large result set, keyset pagination is usually the best choice.
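
Keyset pagination can be sketched roughly like this, walking the table in id order without an OFFSET (the entity and id property are illustrative):

import java.util.List;

import org.hibernate.Session;

public class KeysetReader {
    private static final int PAGE_SIZE = 1000;

    public void processAll(Session session) {
        Long lastId = 0L;
        while (true) {
            @SuppressWarnings("unchecked")
            List<Long> ids = session
                    .createQuery("select e.id from Entity e where e.id > :lastId order by e.id")
                    .setParameter("lastId", lastId)
                    .setMaxResults(PAGE_SIZE)
                    .list();
            if (ids.isEmpty()) {
                break;
            }
            // ... load and process this page of ids (e.g. with the batch loader above) ...
            lastId = ids.get(ids.size() - 1);
            session.clear(); // keep the persistence context from growing
        }
    }
}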

If you can do all the processing in the database, then you will not have to move 250k records from the DB to the application. There's a very good reason why many RDBMSs offer advanced procedural languages (e.g. PL/SQL, T-SQL).
