CachedRowSet slower than ResultSet?

Posted 2019-05-10 21:20

In my Java code, I access an Oracle database table with a SELECT statement. The query returns a lot of rows (about 50,000), so processing them all with rs.next() takes some time.

Using a plain ResultSet, processing all rows (rs.next()) takes about 30 seconds.

My goal is to speed this up, so I changed the code to use a CachedRowSet:

Using a CachedRowSet, processing all rows takes about 35 seconds.

I don't understand why the CachedRowSet is slower than the plain ResultSet, given that the CachedRowSet retrieves all the data at once, while the ResultSet fetches data each time rs.next() is called.

Here is a part of the code:

try {
    stmt = masterCon.prepareStatement(sql);
    rs = stmt.executeQuery();

    CachedRowSet crset = new CachedRowSetImpl();
    crset.populate(rs);

    while (crset.next()) {
        int countStar = crset.getInt("COUNT");
        ...
    }
} finally {
    //cleanup
}

4 Answers
别忘想泡老子
#2 · 2019-05-10 21:27

With a plain ResultSet you have more optimization options, such as RowPrefetch and FetchSize.

These optimize the size of the network transfer chunks and the processing in the while loop, so rs.next() always has data to work with.

FetchSize defaults to 10 in recent Oracle JDBC drivers, but as far as I know RowPrefetch is not set. This means the network transfer is not optimized at all.
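As a minimal sketch of the tuning described above: `defaultRowPrefetch` is an Oracle-JDBC-specific connection property, while `setFetchSize` is the standard JDBC knob set on the statement. The helper below only builds the connection properties; the URL and credentials you would pass to `DriverManager.getConnection(url, props)` are assumptions, not part of the original post.

```java
import java.util.Properties;

public class FetchTuning {
    // Build connection properties with the Oracle-specific row prefetch
    // setting. "defaultRowPrefetch" controls how many rows the Oracle
    // driver pulls per network round trip (driver default: 10).
    static Properties prefetchProps(int rows) {
        Properties props = new Properties();
        props.setProperty("defaultRowPrefetch", Integer.toString(rows));
        return props;
    }
}
```

With a plain JDBC driver the equivalent statement-level hint is `stmt.setFetchSize(500);` before `executeQuery()`; both approaches reduce the number of round trips for a 50,000-row result.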

聊天终结者
#3 · 2019-05-10 21:30

A CachedRowSet caches the results in memory, i.e. you no longer need the connection afterwards. That is why it is "slower" in the first place.

A CachedRowSet object is a container for rows of data that caches its rows in memory, which makes it possible to operate without always being connected to its data source.

-> http://download.oracle.com/javase/1.5.0/docs/api/javax/sql/rowset/CachedRowSet.html
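To illustrate the "container for rows of data" point, here is a small sketch that builds a CachedRowSet entirely in memory, with no Connection at all, using the reference implementation obtained via RowSetProvider (the single COUNT column and its value are made up for the example):

```java
import java.sql.SQLException;
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetMetaDataImpl;
import javax.sql.rowset.RowSetProvider;

public class CachedRowSetDemo {
    // Build a CachedRowSet with no database connection involved, showing
    // that its rows live entirely in JVM memory after population.
    static CachedRowSet inMemoryRowSet() throws SQLException {
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();

        // Describe one INTEGER column named COUNT by hand.
        RowSetMetaDataImpl md = new RowSetMetaDataImpl();
        md.setColumnCount(1);
        md.setColumnName(1, "COUNT");
        md.setColumnType(1, java.sql.Types.INTEGER);
        crs.setMetaData(md);

        // Insert one row directly into the in-memory row set.
        crs.moveToInsertRow();
        crs.updateInt("COUNT", 42);
        crs.insertRow();
        crs.moveToCurrentRow();
        crs.beforeFirst();
        return crs;
    }

    public static void main(String[] args) throws SQLException {
        CachedRowSet crs = inMemoryRowSet();
        while (crs.next()) {
            System.out.println(crs.getInt("COUNT"));
        }
    }
}
```

In the question's code the same thing happens via `crset.populate(rs)`: every row is copied out of the ResultSet into memory first, which is exactly the extra cost being measured.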

女痞
#4 · 2019-05-10 21:42

What makes you think that ResultSet will retrieve the data each time rs.next() is called? It's up to the implementation exactly how it works - and I wouldn't be surprised if it fetches a chunk at a time; quite possibly a fairly large chunk.

I suspect you're basically seeing the time it takes to copy all the data into the CachedRowSet and then access it all - an extra copying operation for no benefit.

Bombasti
#5 · 2019-05-10 21:43

There is an issue with CachedRowSet when used together with the Postgres JDBC driver.

CachedRowSet needs to know the types of the columns so it knows which Java objects to create (god knows what else it fetches from the DB behind the covers!).

It therefore makes extra round trips to the DB to fetch column metadata. At very high volumes this becomes a real problem, and if the DB is on a remote server, network latency makes it worse still.

We've been using CachedRowSet for years and only just discovered this. We now implement our own CachedRowSet, as we never used any of its fancy features anyway. We call getString for all types and convert the values ourselves, as this seems to be the quickest way.

This clearly wasn't a fetch-size issue, since the Postgres driver fetches everything by default.
