Optimizing query: DBMS_METADATA.GET_DDL (Oracle)

Posted 2019-05-27 03:54

Question:

I want to get the table definitions (DDL) for all of my tables, and I want to do it fast (it is part of a script that I run a lot).

I am using Oracle 11g and have 700 tables. With plain JDBC code it takes 4 minutes:

Statement s = con.createStatement();
ResultSet rs = s.executeQuery(
    "select DBMS_METADATA.GET_DDL(object_type, object_name) " +
    "from user_objects where object_type = 'TABLE'");
while (rs.next()) {
    rs.getString(1); // the generated DDL for one table
}

So I want to optimize this code and get it down to around 20 seconds.

I have already gotten it down to 40-50 seconds by creating 14 threads, each of which opens its own connection to the database and reads a slice of the tables, partitioned with MOD on ROWNUM.
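
Roughly, each thread runs a query like the one below (a sketch, not my exact code; fetchSlice, threadIndex and threadCount are just illustrative names, and imports from java.sql and java.util are assumed):

// Each worker handles the tables whose position (ROWNUM) modulo the
// thread count equals its own index.
static List<String> fetchSlice(Connection con, int threadIndex, int threadCount)
        throws SQLException {
    String sql =
        "select DBMS_METADATA.GET_DDL(object_type, object_name) " +
        "from (select object_type, object_name, rownum rn " +
        "      from user_objects where object_type = 'TABLE') " +
        "where mod(rn, ?) = ?";
    List<String> ddls = new ArrayList<>();
    try (PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setInt(1, threadCount);
        ps.setInt(2, threadIndex);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                ddls.add(rs.getString(1));
            }
        }
    }
    return ddls;
}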

But this is not enough.

I am thinking in these directions:

  1. http://docs.oracle.com/cd/B10501_01/java.920/a96654/connpoca.htm#1063660 - connection caching. Can it help speed things up if I replace my 14 separate connections with connection caching? (See the sketch after this list.)

  2. Is it possible to keep the tables accessed by this function in the KEEP buffer cache area?

  3. Any way of indexing some of the information here?

  4. Any other suggestions would be greatly appreciated.
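
For option 1, something like the following is what I have in mind; it uses the 11g driver's implicit connection cache on oracle.jdbc.pool.OracleDataSource (the page I linked describes the older 9.2 caching API), and the URL and credentials are placeholders:

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class CachedConnections {
    public static void main(String[] args) throws SQLException {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder connect string
        ods.setUser("scott");                               // placeholder credentials
        ods.setPassword("tiger");
        ods.setConnectionCachingEnabled(true);              // enable the implicit cache

        Properties cacheProps = new Properties();
        cacheProps.setProperty("MinLimit", "2");
        cacheProps.setProperty("MaxLimit", "14");           // one connection per worker thread
        ods.setConnectionCacheProperties(cacheProps);

        // Worker threads borrow cached connections instead of opening their own.
        try (Connection con = ods.getConnection()) {
            // ... run one slice of the GET_DDL query here ...
        }
    }
}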

Thank you

Answer 1:

I'm afraid there is no easy way to make it faster. The whole GET_DDL machinery is implemented in Java and uses an XSLT transformation as part of the generation process.

Maybe you will find this faster: http://metacpan.org/pod/DDL::Oracle



Answer 2:

Do you really need to get the DDL every time, even if the tables haven't been changed? If not, only get the DDL of those tables whose ALL_OBJECTS.LAST_DDL_TIME has changed since you last retrieved it.
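
For example, something along these lines (a sketch; con, lastRunTimestamp and the ddlCache map are placeholders, and USER_OBJECTS is used here since it also exposes LAST_DDL_TIME):

// Only regenerate the DDL for tables changed since the previous run.
String sql =
    "select object_name, DBMS_METADATA.GET_DDL('TABLE', object_name) " +
    "from user_objects " +
    "where object_type = 'TABLE' and last_ddl_time > ?";
try (PreparedStatement ps = con.prepareStatement(sql)) {
    ps.setTimestamp(1, lastRunTimestamp);               // saved at the end of the previous run
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            ddlCache.put(rs.getString(1), rs.getString(2)); // refresh only the changed tables
        }
    }
}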

Another option would be to write your own GET_DDL replacement that can generate the DDL for more than one table at once.
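
A very rough illustration of that idea: reconstruct only the basic column definitions for every table from USER_TAB_COLUMNS in a single round trip (nowhere near the fidelity of DBMS_METADATA -- no constraints, indexes or storage -- but it shows the one-query-for-all-tables shape; con and the imports are assumed):

// Build minimal CREATE TABLE statements for all tables at once.
Map<String, StringBuilder> ddlByTable = new LinkedHashMap<>();
String sql =
    "select table_name, column_name, data_type, nullable " +
    "from user_tab_columns order by table_name, column_id";
try (Statement s = con.createStatement(); ResultSet rs = s.executeQuery(sql)) {
    while (rs.next()) {
        String table = rs.getString("TABLE_NAME");
        StringBuilder ddl = ddlByTable.computeIfAbsent(
            table, t -> new StringBuilder("CREATE TABLE " + t + " (\n"));
        ddl.append("  ").append(rs.getString("COLUMN_NAME"))
           .append(' ').append(rs.getString("DATA_TYPE"))
           .append("N".equals(rs.getString("NULLABLE")) ? " NOT NULL" : "")
           .append(",\n");
    }
}
// Each builder still needs its trailing ",\n" replaced with "\n)" before use.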



Answer 3:

I would first go with HAL's suggestion of only capturing changes, but I'd also look at eliminating any options that I do not need -- STORAGE clauses, for example.
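
For example, storage-related clauses can be switched off for the session with DBMS_METADATA's standard transform parameters before calling GET_DDL (here invoked through JDBC; con is a placeholder connection):

// Omit STORAGE and segment attribute clauses from all subsequent
// GET_DDL calls on this connection's session.
try (CallableStatement cs = con.prepareCall(
        "begin " +
        "  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false); " +
        "  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SEGMENT_ATTRIBUTES', false); " +
        "end;")) {
    cs.execute();
}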



Tags: java oracle jdbc