Oracle: Bulk Collect performance

Posted 2019-01-17 05:13

Can you help me to understand this phrase?

Without the bulk bind, PL/SQL sends a SQL statement to the SQL engine for each record that is inserted, updated, or deleted leading to context switches that hurt performance.

Tags: oracle bulk
2 Answers
The star · 2019-01-17 05:39

As I understand it, there are two engines involved, the PL/SQL engine and the SQL engine. Code that uses only one engine at a time is more efficient than code that keeps switching between the two.

Example:

  INSERT INTO t VALUES(1)

is processed by the SQL engine, while

  FOR Lcntr IN 1..20 LOOP
    ...
  END LOOP;

is executed by the PL/SQL engine.

If you combine the two statements above, putting the INSERT inside the loop,

FOR Lcntr IN 1..20 LOOP
  INSERT INTO t VALUES(1);
END LOOP;

Oracle will switch between the two engines for each of the 20 iterations. In this case a bulk bind (FORALL) is recommended: the rows are built up in the PL/SQL engine and handed to the SQL engine in a single context switch, as in the sketch below.
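
For example, a minimal sketch of the bulk version of that loop (assuming, as above, that t has a single numeric column) might look like this:

DECLARE
  TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  l_vals num_tab;
BEGIN
  -- Build all 20 values inside the PL/SQL engine
  FOR Lcntr IN 1..20 LOOP
    l_vals(Lcntr) := 1;
  END LOOP;

  -- Hand the whole collection to the SQL engine in a single context switch
  FORALL Lcntr IN 1..20
    INSERT INTO t VALUES (l_vals(Lcntr));
END;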

我想做一个坏孩纸 · 2019-01-17 05:40

Within Oracle, there is a SQL virtual machine (VM) and a PL/SQL VM. When you need to move from one VM to the other VM, you incur the cost of a context shift. Individually, those context shifts are relatively quick, but when you're doing row-by-row processing, they can add up to account for a significant fraction of the time your code is spending. When you use bulk binds, you move multiple rows of data from one VM to the other with a single context shift, significantly reducing the number of context shifts, making your code faster.

Take, for example, an explicit cursor. If I write something like this

DECLARE
  CURSOR c 
      IS SELECT *
           FROM source_table;
  l_rec source_table%rowtype;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO l_rec;
    EXIT WHEN c%notfound;

    INSERT INTO dest_table( col1, col2, ... , colN )
      VALUES( l_rec.col1, l_rec.col2, ... , l_rec.colN );
  END LOOP;
  CLOSE c;
END;

then every time I execute the fetch, I am

  • Performing a context shift from the PL/SQL VM to the SQL VM
  • Asking the SQL VM to execute the cursor to generate the next row of data
  • Performing another context shift from the SQL VM back to the PL/SQL VM to return my single row of data

And every time I insert a row, I'm doing the same thing. I am incurring the cost of a context shift to ship one row of data from the PL/SQL VM to the SQL VM, asking the SQL VM to execute the INSERT statement, and then incurring the cost of another context shift back to PL/SQL.

If source_table has 1 million rows, that's 4 million context shifts which will likely account for a reasonable fraction of the elapsed time of my code. If, on the other hand, I do a BULK COLLECT with a LIMIT of 100, I can eliminate 99% of my context shifts by retrieving 100 rows of data from the SQL VM into a collection in PL/SQL every time I incur the cost of a context shift and inserting 100 rows into the destination table every time I incur a context shift there.

I can rewrite my code to make use of bulk operations

DECLARE
  CURSOR c 
      IS SELECT *
           FROM source_table;
  TYPE  nt_type IS TABLE OF source_table%rowtype;
  l_arr nt_type;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_arr LIMIT 100;
    EXIT WHEN l_arr.count = 0;

    FORALL i IN 1 .. l_arr.count
      INSERT INTO dest_table( col1, col2, ... , colN )
        VALUES( l_arr(i).col1, l_arr(i).col2, ... , l_arr(i).colN );
  END LOOP;
  CLOSE c;
END;

Now, every time I execute the fetch, I retrieve 100 rows of data into my collection with a single set of context shifts. And every time I do my FORALL insert, I am inserting 100 rows with a single set of context shifts. If source_table has 1 million rows, this means that I've gone from 4 million context shifts to 40,000 context shifts. If context shifts accounted for, say, 20% of the elapsed time of my code, I've eliminated 19.8% of the elapsed time.

You can increase the size of the LIMIT to further reduce the number of context shifts but you quickly hit the law of diminishing returns. If you used a LIMIT of 1000 rather than 100, you'd eliminate 99.9% of the context shifts rather than 99%. That would mean that your collection was using 10x more PGA memory, however. And it would only eliminate 0.18% more elapsed time in our hypothetical example. You very quickly reach a point where the additional memory you're using adds more time than you save by eliminating additional context shifts. In general, a LIMIT somewhere between 100 and 1000 is likely to be the sweet spot.

Of course, in this example, it would be more efficient still to eliminate all context shifts and do everything in a single SQL statement

INSERT INTO dest_table( col1, col2, ... , colN )
  SELECT col1, col2, ... , colN
    FROM source_table;

It would only make sense to resort to PL/SQL in the first place if you're doing some sort of manipulation of the data from the source table that you can't reasonably implement in SQL.
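
For instance, here is a sketch of where that kind of per-row logic would slot into the bulk pattern above. complex_transform is a hypothetical PL/SQL-only function, and dest_table is assumed to have the same column layout as source_table:

DECLARE
  CURSOR c 
      IS SELECT *
           FROM source_table;
  TYPE  nt_type IS TABLE OF source_table%rowtype;
  l_arr nt_type;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_arr LIMIT 100;
    EXIT WHEN l_arr.count = 0;

    -- Per-row manipulation happens entirely in the PL/SQL VM, so it adds no context shifts
    FOR i IN 1 .. l_arr.count LOOP
      l_arr(i).col1 := complex_transform( l_arr(i).col1 );  -- hypothetical transformation
    END LOOP;

    FORALL i IN 1 .. l_arr.count
      INSERT INTO dest_table VALUES l_arr(i);
  END LOOP;
  CLOSE c;
END;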

Additionally, I used an explicit cursor in my example intentionally. If you are using implicit cursors, in recent versions of Oracle, you get the benefits of a BULK COLLECT with a LIMIT of 100 implicitly. There is another StackOverflow question that discusses the relative performance benefits of implicit and explicit cursors with bulk operations that goes into more detail about those particular wrinkles.
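
For illustration, the implicit-cursor form referred to there is the cursor FOR loop. In recent Oracle versions (with the default PL/SQL optimization level), the fetch side of this loop is array-fetched 100 rows at a time behind the scenes, although each INSERT is still executed row by row; dest_table is again assumed to match source_table's column layout:

BEGIN
  FOR r IN ( SELECT * FROM source_table ) LOOP
    -- The fetch is silently bulked up; the insert still costs one context shift per row
    INSERT INTO dest_table VALUES r;
  END LOOP;
END;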
