In my Java code, I access an Oracle database table with a SELECT statement. I receive a lot of rows (about 50,000), so iterating over all of them with rs.next() takes some time.

Using a ResultSet, processing all rows via rs.next() takes about 30 seconds. My goal is to speed this up, so I changed the code to use a CachedRowSet instead. Using the CachedRowSet, processing all rows takes about 35 seconds.

I don't understand why the CachedRowSet is slower than the normal ResultSet, because the CachedRowSet retrieves all the data at once, while the ResultSet retrieves data every time rs.next() is called.

Here is a part of the code:
try {
    stmt = masterCon.prepareStatement(sql);
    rs = stmt.executeQuery();
    CachedRowSet crset = new CachedRowSetImpl();
    crset.populate(rs);
    // iterate over the cached copy; populate() has already
    // consumed the original ResultSet
    while (crset.next()) {
        int countStar = crset.getInt("COUNT");
        ...
    }
} finally {
    //cleanup
}
Using a normal ResultSet you get more optimization options with RowPrefetch and FetchSize. These optimize the network transport chunks and the processing in the while loop, so that rs.next() always has data to work with.

FetchSize defaults to 10 in recent Oracle versions, but as far as I know RowPrefetch is not set by default. That means the network transport is not optimized at all.
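As an illustration, here is a minimal sketch of setting both, reusing the question's masterCon, stmt and rs. setFetchSize is standard JDBC, setRowPrefetch is Oracle-specific, and the value 500 is only an example to tune against your own data and network:

stmt = masterCon.prepareStatement(sql);
// Standard JDBC: transfer rows from the server in chunks of 500
// instead of the Oracle default of 10.
stmt.setFetchSize(500);
// Oracle-specific equivalent when using the Oracle driver:
// ((oracle.jdbc.OracleStatement) stmt).setRowPrefetch(500);
rs = stmt.executeQuery();
while (rs.next()) {
    // rs.next() now reads mostly from the driver's local buffer;
    // a network round trip happens only about once per 500 rows.
    int countStar = rs.getInt("COUNT");
}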
CachedRowSet caches the results in memory, i.e. you don't need the connection anymore. That is why it is "slower" in the first place.

-> http://download.oracle.com/javase/1.5.0/docs/api/javax/sql/rowset/CachedRowSet.html
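A short sketch of where that trade-off pays off, again reusing the question's masterCon and sql - the point being that iteration still works after the connection is closed:

CachedRowSet crset = new CachedRowSetImpl();
stmt = masterCon.prepareStatement(sql);
rs = stmt.executeQuery();
crset.populate(rs);     // copies every row into the JVM heap
rs.close();
stmt.close();
masterCon.close();      // safe: the cached copy is self-contained
while (crset.next()) {  // no database round trips from here on
    int countStar = crset.getInt("COUNT");
}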
What makes you think that ResultSet will retrieve the data each time rs.next() is called? It's up to the implementation exactly how it works - and I wouldn't be surprised if it fetches a chunk at a time; quite possibly a fairly large chunk.

I suspect you're basically seeing the time it takes to copy all the data into the CachedRowSet and then access it all - basically you've got an extra copying operation for no purpose.

There is an issue with CachedRowSet coupled with the postgres JDBC driver. CachedRowSet needs to know the types of the columns so it knows which Java objects to create (god knows what else it fetches from the DB behind the covers!). It therefore makes extra round trips to the DB to fetch column metadata. At very high volumes this becomes a real problem, and it is a real problem as well when the DB is on a remote server, because of network latency.
We've been using CachedRowSet for years and just discovered this. We now implement our own CachedRowSet, as we never used any of its fancy stuff anyway. We do getString for all types and convert ourselves, as this seems the quickest way. This clearly wasn't an issue with fetch size, as the postgres driver fetches everything by default.
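Roughly, the idea looks like this - a simplified sketch, not our real class. The column count comes from the metadata exactly once, everything is read with getString, and the caller converts as needed:

import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class SimpleRowCache {
    private final List<String[]> rows = new ArrayList<String[]>();

    // One pass over the ResultSet; getMetaData() is consulted a single time.
    SimpleRowCache(ResultSet rs) throws SQLException {
        int cols = rs.getMetaData().getColumnCount();
        while (rs.next()) {
            String[] row = new String[cols];
            for (int i = 0; i < cols; i++) {
                row[i] = rs.getString(i + 1);   // JDBC columns are 1-based
            }
            rows.add(row);
        }
    }

    // Callers convert themselves, e.g. Integer.parseInt(cache.get(r, 0)).
    String get(int rowIndex, int columnIndex) {
        return rows.get(rowIndex)[columnIndex];
    }

    int size() {
        return rows.size();
    }
}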