I'm using JDBC connection pooling with Jetty, and I had this setup running on a server instance with no problems for over a year. I've switched to a new Ubuntu server and now keep running out of memory. Profiling memory usage shows the following top class instances:
java.util.Hashtable$Entry (33%)
com.mysql.jdbc.ConnectionPropertiesImpl$BooleanConnectionProperty (13%)
com.mysql.jdbc.ConnectionPropertiesImpl$StringConnectionProperty (3%)
com.mysql.jdbc.ConnectionPropertiesImpl$IntegerConnectionProperty (3%)
I'm not explicitly closing my ResultSet instances; I'm doing something like:
Connection conn = null;
PreparedStatement stmt = null;
try {
    conn = ...;
    stmt = ...;
    ResultSet rs = ...;
    rs.useIt();
    // no rs.close() here - relying on stmt.close() to close it
}
finally {
    if (stmt != null) { stmt.close(); }
    if (conn != null) { conn.close(); }
}
The docs say that a Statement closes its associated ResultSet instances when it is closed. Is it possible that the JDBC implementation on this Ubuntu machine isn't actually doing that? I haven't made any changes to my code (a packaged .war file); I just dropped it into a Jetty instance on this Ubuntu machine as-is.
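If it helps, this is the kind of quick sanity check I'm planning to run to confirm the driver's behavior here. It's just a sketch, with ds standing in for my pooled DataSource, and rs.isClosed() needs JDBC 4.0 / Java 6+:

// Quick check of the driver's behavior on this machine: after closing the
// Statement, its ResultSet should report itself as closed (JDBC 4.0+).
// "ds" stands in for the pooled DataSource looked up from JNDI.
Connection conn = ds.getConnection();
PreparedStatement stmt = conn.prepareStatement("SELECT 1");
ResultSet rs = stmt.executeQuery();
stmt.close();
System.out.println("ResultSet closed with its Statement? " + rs.isClosed());
conn.close();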
The profiling shows those com.mysql.jdbc.ConnectionPropertiesImpl instances continuing to grow, so I'm guessing this may be what's happening.
I just wanted to check before I go off and modify all my code to explicitly close the ResultSet instances. Looking at MySQL Workbench, I don't see anything out of the ordinary in the Client Connections view; my app's connections come in and appear to get cleaned up OK.
Thanks
-------------------- Update --------------------
I've changed all my code to close ResultSet instances explicitly, both as soon as I'm done with them and again in the finally {} block, just like my Connection and PreparedStatement instances. This doesn't seem to be helping, though; memory keeps growing.
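For reference, the pattern now looks roughly like this (same shape as before, details elided):

Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    conn = ...;
    stmt = ...;
    rs = ...;
    rs.useIt();
    rs.close();   // closed as soon as I'm done with it
}
finally {
    if (rs != null) { rs.close(); }      // and again here, defensively
    if (stmt != null) { stmt.close(); }
    if (conn != null) { conn.close(); }
}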
Poking around some more, I noticed the class:
com.mysql.jdbc.JDBC4Connection
and that the number of instances never decreases. After about 10 minutes of uptime, I'm seeing almost 5k instances (yikes?).
This post looks very similar to what I'm running into:
Memory leak in JDBC4Connection
The OP says in the end:
the ORM relied on closing connection to free resources in finalizer (sometimes over closing the result sets and statements), but the pool kept the connections open for several hours, and in case of any peak, this caused OOM.
I don't understand what that means, or how one would go about fixing it. I'm pretty sure I'm closing my resources everywhere now (otherwise I'd presumably have seen a leak when I was running the same webapp on my old Windows server, too?).
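If it means the pool is holding physical connections open for hours while each one keeps accumulating state, then maybe the fix is on the pool configuration side rather than in my code. I'm not sure which pool my Jetty JNDI resource is actually backed by, but if it were, say, Commons DBCP 1.x, I imagine the relevant knobs would look something like this (values made up):

// Hypothetical settings (org.apache.commons.dbcp.BasicDataSource) to stop idle
// connections living for hours; my real resource is defined in Jetty's JNDI
// config, so this is only a sketch of the idea.
BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("com.mysql.jdbc.Driver");
ds.setUrl("jdbc:mysql://localhost:3306/mydb");
ds.setUsername("user");
ds.setPassword("pass");
ds.setMaxActive(20);                         // cap concurrent connections
ds.setMaxIdle(5);                            // keep at most 5 idle connections
ds.setTimeBetweenEvictionRunsMillis(60000);  // run the idle evictor every minute
ds.setMinEvictableIdleTimeMillis(300000);    // close connections idle > 5 minutes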
-------------------- Update --------------------
Still seeing the same behavior. This is how I'm opening a DB connection in each request I handle (error handling omitted):
public class DsHelper {
    private static DsHelper sInstance;
    private DataSource mDs;

    public DsHelper() {
        InitialContext ctx = new InitialContext();
        mDs = (DataSource) ctx.lookup("java:comp/env/jdbc/myds");
    }

    public static DsHelper get() {
        if (sInstance == null) {
            sInstance = new DsHelper();
        }
        return sInstance;
    }

    public DataSource getDataSource() {
        return mDs;
    }
}
Using it:
protected void doGet(HttpServletRequest request,
                     HttpServletResponse response)
{
    Connection conn = null;
    try {
        conn = DsHelper.get().getDataSource().getConnection();
        ...
    }
    finally {
        if (conn != null) { conn.close(); }
    }
}
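For completeness, if nothing else turns up I'm thinking of moving the handlers over to try-with-resources (Java 7+), which closes the ResultSet, Statement, and Connection in reverse order even if something throws. Roughly (query elided):

@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    // try-with-resources returns the connection to the pool even when a query throws
    try (Connection conn = DsHelper.get().getDataSource().getConnection();
         PreparedStatement stmt = conn.prepareStatement("SELECT ...")) {
        try (ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                // use the row
            }
        }
    } catch (SQLException e) {
        throw new ServletException(e);
    }
}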