We have a web application running in a production environment, and at some point the client complained about how slow the application had become.
When we checked what was going on with the application and the database, we discovered this "precious" query being executed by several users at the same time (thus putting an extremely high load on the database server):
SELECT NULL AS table_cat,
o.owner AS table_schem,
o.object_name AS table_name,
o.object_type AS table_type,
NULL AS remarks
FROM all_objects o
WHERE o.owner LIKE :1 ESCAPE :"SYS_B_0" AND
o.object_name LIKE :2 ESCAPE :"SYS_B_1" AND
o.object_type IN(:"SYS_B_2", :"SYS_B_3")
ORDER BY table_type, table_schem, table_name
Our application does not execute this query; I believe it is a Hibernate internal query. I've found little information on why Hibernate runs this extremely heavy query, so any help on how to avoid it is very much appreciated!
The production environment information: Red Hat Enterprise Linux 5.3 (Tikanga), JDK 1.5, web container OC4J (within Oracle Application Server), Oracle Database 10.1.0.4, JDBC Driver for JDK 1.2 and 1.3, Hibernate version 3.2.6.ga, connection pool library C3P0 version 0.9.1.
UPDATE: Thanks to @BalusC for clarifying that it is indeed Hibernate that executes the query; now I have a better idea of what's going on. I'll explain the way we handle the Hibernate session (it's very rudimentary, yes; if you have suggestions about how to handle it better, they are more than welcome!)
We have a filter (implements javax.servlet.Filter) that, when it starts (in the init method), constructs the session factory (supposedly this happens only once). Then every HTTP request that goes to the application passes through the filter, obtains a new session, and starts a transaction. When the processing is over, it comes back through the filter, which commits the transaction, closes the Hibernate session, and then continues to the forward page (we don't store the Hibernate session in the HTTP session because it never worked well in our tests).
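For reference, here is roughly what that filter looks like (a simplified sketch, not our actual code; the class name is illustrative and the detail of how application code gets hold of the session is omitted):

import javax.servlet.*;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Sketch of the filter described above (Hibernate 3.x, javax.servlet API).
public class HibernateSessionFilter implements Filter {

    private SessionFactory sessionFactory;

    public void init(FilterConfig config) throws ServletException {
        // Built in init(); the container should create only one filter
        // instance per <filter> declaration, so this should run once.
        sessionFactory = new Configuration().configure().buildSessionFactory();
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws java.io.IOException, ServletException {
        // One session and one transaction per request; in the real code the
        // application obtains the session elsewhere, e.g. via a ThreadLocal.
        Session session = sessionFactory.openSession();
        try {
            session.beginTransaction();
            chain.doFilter(request, response);
            session.getTransaction().commit();
        } catch (RuntimeException e) {
            session.getTransaction().rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    public void destroy() {
        if (sessionFactory != null) {
            sessionFactory.close();
        }
    }
}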
Now here comes the part where I think the problem is. In our development environment we deploy our apps in Tomcat 5.5, and when we start the service all filters start immediately and only once. The production environment with OC4J doesn't seem to work that way. We deploy the application, and only when the first request arrives does OC4J instantiate the filters.
This leads me to think that OC4J instantiates the filters on every request (or at least multiple times, which is still wrong), thus creating a session factory on every request, which executes that %&%#%$# query, which leads to my problem!
Now, is that correct? Is there a way for me to configure OC4J so that it instantiates filters only once?
Thanks very much to all of you for taking the time to respond to this!
As pointed out by @BalusC, this query is performed during schema validation. But validation is usually done once and for all when creating the SessionFactory (if activated). Do you call the following method explicitly: Configuration#validateSchema(Dialect, DatabaseMetadata)?
Your implementation of the Open Session In View pattern looks fine (and is very close to the one suggested on this page). And according to the Servlet specification, only one instance per <filter> declaration in the deployment descriptor is instantiated per Java Virtual Machine (JVM) of the container. Since it is very unlikely that this isn't the case with OC4J, I'm tempted to say that there must be something else.
Can you put some logging in the filter? What about making the SessionFactory static (in a good old HibernateUtil class)?
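For illustration, the classic HibernateUtil pattern I mean looks roughly like this (just a sketch):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// "Good old HibernateUtil": the SessionFactory is built exactly once when
// the class is loaded, no matter how many times filters get instantiated.
public class HibernateUtil {

    private static final SessionFactory SESSION_FACTORY;

    static {
        try {
            SESSION_FACTORY = new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            // Surface configuration problems at class-loading time.
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return SESSION_FACTORY;
    }
}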
I believe this query is coming from the Oracle JDBC driver to implement a Hibernate request to retrieve database object info through DatabaseMetaData.
This query shouldn't be too expensive, or at least isn't on a system I have handy. What's your count of all_objects and more importantly, what do you see in the rows/bytes total for the explain plan?
All right, after months of looking at the thing, it turns out that the problem wasn't my web application. The problem was the other Oracle Forms applications that use the same instance (different user) of the database.
What was happening was that the Oracle Forms applications were locking records on the database, therefore making pretty much all of the work of the database extremely slow (including my beloved Hibernate query).
The reason for the locks was that none of the foreign keys in the Oracle Forms apps were indexed. As my boss explained it to me (he discovered the reason), when a user is editing the master record of a master-detail relationship in an Oracle Forms application, the database locks the entire detail table if there is no index on its foreign key. That is because of the way Oracle Forms works: it updates all of the fields of the master record, including the ones of the primary key, which are the fields that the foreign key references.
In short, please NEVER leave your foreign keys without indexes. We suffered a lot with this.
Thanks to all of you who took the time to help.
Specifically, what happens is that folks who write software supporting different databases package it in a database-neutral way. That is, when an override isn't present, they use the JDBC DatabaseMetaData getTables call to check whether the connection is still valid. Typically you override this with something like select * from dual, but when that's not done, or you don't specifically say what kind of database you are using, the software falls back to something that will work with any JDBC driver, and the JDBC DatabaseMetaData getTables call does exactly that.
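In other words, a database-neutral connection test without an override looks something like this sketch (class and method names are illustrative; on Oracle the driver backs getTables() with the ALL_OBJECTS query shown in the question):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

// Vendor-neutral "is this connection alive?" check: no SQL text is
// hard-coded, so it works with any JDBC driver, but on Oracle the
// metadata call translates into the expensive ALL_OBJECTS query.
public class GenericConnectionTester {

    public boolean isValid(Connection connection) {
        try {
            DatabaseMetaData metaData = connection.getMetaData();
            ResultSet tables = metaData.getTables(null, null, "%",
                    new String[] { "TABLE", "VIEW" });
            tables.close();
            return true;
        } catch (SQLException e) {
            return false;
        }
    }
}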
Had the same problem; the cause was exactly the one described by Bob Breitling: by default, C3P0 uses the JDBC API for connection testing.
In order to change this behavior, preferredTestQuery must be set, or, if C3P0 is used through Hibernate, hibernate.c3p0.preferredTestQuery.
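For example, if the SessionFactory is built programmatically, it could be set like this (a sketch; the property can equally go in hibernate.cfg.xml or hibernate.properties, the class name is illustrative, and SELECT 1 FROM DUAL is just one cheap Oracle query):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Give C3P0 a cheap test query so connection checks stop falling back to
// the JDBC metadata call.
public class SessionFactoryBuilder {

    public static SessionFactory build() {
        Configuration configuration = new Configuration().configure();
        configuration.setProperty("hibernate.c3p0.preferredTestQuery",
                "SELECT 1 FROM DUAL");
        return configuration.buildSessionFactory();
    }
}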
I just wanted to put in the workaround I used to get around this problem. We typically have lots of schemas in our databases, and in the Hibernate-based application we were trying to use this would take hours to finish because of the large number of objects it ended up checking (the query itself would execute fast, but it just ran so many of them).
What I did was override the ALL_OBJECTS view in the schema being connected to so that it only brought back its own objects and not all objects in the DB.
e.g.
CREATE OR REPLACE VIEW ALL_OBJECTS AS SELECT USER OWNER, O.* FROM USER_OBJECTS O;
It's not the greatest solution, but for this application nothing else uses the ALL_OBJECTS view, so it works fine and starts up substantially faster.