I run an insert from an Oracle SQL tool that selects about 100,000 rows from various tables and inserts them into a table in a different database, and it finishes in about 3 minutes. When I run the same query inside a stored procedure (PL/SQL), it runs for about 2 hours. For information, the source database is Oracle 11g and the destination is Oracle 10g.
Stored procedure (takes about 2 hours per store):
CREATE OR REPLACE PROCEDURE MGS.TRANSFER_DATA(a_date in varchar2) -- yyyymmdd
IS
BEGIN
FOR xx IN(
SELECT STR_CD FROM MGS.STORE ORDER BY STR_CD ASC
) LOOP
CALL_LOG(xx.STR_CD,a_date,sysdate,"START INSERT");
INSERT INTO STOCK@BBS
SELECT
STR_CD, STK.DT, PROD_CD, QTY, ORDER_QTY, ORDER_QTY
FROM
MGS.STOCK STK, MGS.SALE SAL
WHERE STK.STR_CD = xx.STR_CD
AND STK.STR_CD = SAL.STR_CD(+)
AND STK.PROD_CD = SAL.PROD_CD(+)
AND STK.DT = SAL.DT(+)
AND STK.DT = a_date;
CALL_LOG(xx.STR_CD,a_date,sysdate,"INSERT SUCESSFULL");
END LOOP;
END TRANSFER_DATA;
I tried the query for one store (it takes just 3 minutes):
INSERT INTO STOCK@BBS
SELECT
STR_CD, STK.DT, PROD_CD, QTY, ORDER_QTY, ORDER_QTY
FROM
MGS.STOCK STK, MGS.SALE SAL
WHERE STK.STR_CD = 'STORE01'
AND STK.STR_CD = SAL.STR_CD(+)
AND STK.PROD_CD = SAL.PROD_CD(+)
AND STK.DT = SAL.DT(+)
AND STK.DT = '20120801'; -- yyyymmdd
Try starting all these inserts in parallel, with a dbms_job for each store. That may let you achieve the same performance as the conventional SQL. I think dbms_parallel_execute could help you as well; please refer to http://www.oracle-base.com/articles/11g/dbms_parallel_execute_11gR2.php
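A minimal sketch of the dbms_job approach, assuming the per-store insert is first moved into its own procedure. I'm calling it MGS.TRANSFER_STORE(p_str_cd, a_date) here; that name and signature are my invention, not something in your code:

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  FOR xx IN (SELECT STR_CD FROM MGS.STORE ORDER BY STR_CD) LOOP
    -- queue one job per store; each job runs its per-store insert independently
    DBMS_JOB.SUBMIT(
      job  => l_job,
      what => 'MGS.TRANSFER_STORE(''' || xx.STR_CD || ''', ''20120801'');'
    );
  END LOOP;
  COMMIT; -- the queued jobs only start once the submitting session commits
END;

How many jobs actually run at the same time is limited by the job_queue_processes parameter, so check that it is set high enough to get real parallelism.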
I haven't found out why the query becomes slow inside PL/SQL, but I have a solution below:
My best guess is that the plan being generated for the query in the procedure differs significantly from the plan generated for the stand-alone query. In the stand-alone version you've got constants in there which allow the optimizer to make some good assumptions. Without the constants the optimizer has less to go on and is thus probably making some different decisions. Try this:
1. Get the execution plan for the stand-alone query.
2. Get the execution plan for a version that replaces your loop with a subquery factoring clause (a hedged sketch of such a query, and of how to display its plan, follows this list). Note that I had to double-quote the columns named DATE; I'm surprised you didn't encounter this.
3. Compare the plans. The second plan is obviously going to differ from the first due to the presence of the subquery factoring clause, but try to compare the matching elements. In particular, look for elements where the first query is using an index while the second is performing a full table scan.
4. Add hints as necessary to the second query to get its plan to match the first as closely as possible.
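Here is a hedged sketch of what step 2 might look like against your tables. It explains only the SELECT part (the remote INSERT target is left out for simplicity), uses your DT column names, a bind variable :a_date to stand in for the procedure parameter, and my guess at which table owns QTY and ORDER_QTY:

EXPLAIN PLAN FOR
WITH stores AS (
  SELECT STR_CD FROM MGS.STORE
)
SELECT
  STK.STR_CD, STK.DT, STK.PROD_CD, STK.QTY, SAL.ORDER_QTY, SAL.ORDER_QTY
FROM
  stores ST, MGS.STOCK STK, MGS.SALE SAL
WHERE STK.STR_CD = ST.STR_CD
AND STK.STR_CD = SAL.STR_CD(+)
AND STK.PROD_CD = SAL.PROD_CD(+)
AND STK.DT = SAL.DT(+)
AND STK.DT = :a_date;

-- display the plan just produced
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN does not execute the statement, so the bind variable needs no value; the fact that the optimizer sees an untyped, unpeeked bind here instead of your literal is exactly the kind of difference being discussed.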
The other possibility, which I consider unlikely, is that the logging calls are where the time is being spent. For grins you might try commenting the logging out to see if that has an effect (which I wouldn't expect, but life has thus far been filled with things I didn't expect :-).
Share and enjoy.