SQL Server developers consider cursors a bad practice, except under some circumstances. They believe that cursors use the SQL engine suboptimally, since a cursor is a procedural construct and defeats the set-based concept of an RDBMS.
However, Oracle developers do not seem to recommend against cursors. Oracle's DML statements themselves are implicit cursors.
Why this difference in approach? Is it because of the way these two products are built, or does this advice apply to both products?
What's wrong with cursors is that they are often abused, both in Oracle and in MS SQL Server.
Cursors are for keeping a stable resultset which you can retrieve row by row. They are implicitly created when your query is run, and closed when it finishes.
Of course keeping such a resultset requires some resources: locks, latches, memory, even disk space.
The faster these resources are freed, the better.
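For illustration, a minimal PL/SQL sketch of the explicit lifecycle; the dept table and its dname column are made up for this example:

DECLARE
  CURSOR c IS SELECT dname FROM dept;   -- dept is a hypothetical table
  v_name dept.dname%TYPE;
BEGIN
  OPEN c;                -- locks, latches and memory are held from here...
  FETCH c INTO v_name;   -- the resultset is retrieved row by row
  CLOSE c;               -- ...and freed here, so close promptly
END;
/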
Keeping a cursor open is like keeping a fridge door open: you don't do it for hours without necessity, but it does not mean you should never open your fridge.
That means that:
- You don't get your results row by row and sum them: you call SQL's SUM instead.
- You don't execute the whole query and take only the first results from the cursor: you append a rownum <= 10 condition to your query, etc. (both are sketched below)
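Both points in one hedged sketch; the orders table and its amount column are invented for illustration:

-- Let the engine aggregate instead of fetching rows and summing client-side:
SELECT SUM(amount) FROM orders;

-- Limit inside the query instead of abandoning a full cursor after 10 rows:
SELECT * FROM orders WHERE rownum <= 10;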
As for Oracle, processing your cursors inside a procedure requires the infamous SQL/PL/SQL context switch, which happens every time you get a result of an SQL query out of the cursor. It involves passing large amounts of data between threads and synchronizing the threads. This is one of the most irritating things in Oracle.
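A rough sketch of the difference, assuming a hypothetical table t with a flag column; the loop pays the switch for every row it processes, the single statement pays it once:

-- Row by row: every fetch and every UPDATE crosses the PL/SQL/SQL boundary.
BEGIN
  FOR r IN (SELECT id FROM t WHERE flag = 'N') LOOP
    UPDATE t SET flag = 'Y' WHERE id = r.id;
  END LOOP;
END;
/

-- Set-based: a single statement, a single switch.
UPDATE t SET flag = 'Y' WHERE flag = 'N';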
One of the less evident consequences of that behaviour is that triggers in Oracle should be avoided if possible. Issuing a DML statement against a table with a row-level trigger is equivalent to opening a cursor over the affected rows and calling the trigger code for each row of that cursor. The mere existence of the trigger (even an empty trigger) may slow a DML operation down 10 times or more.
A test script on 10g:
SQL> CREATE TABLE trigger_test (id INT NOT NULL)
2 /
Table created
Executed in 0,031 seconds
SQL> INSERT
2 INTO trigger_test
3 SELECT level
4 FROM dual
5 CONNECT BY
6 level <= 1000000
7 /
1000000 rows inserted
Executed in 1,469 seconds
SQL> COMMIT
2 /
Commit complete
Executed in 0 seconds
SQL> TRUNCATE TABLE trigger_test
2 /
Table truncated
Executed in 3 seconds
SQL> CREATE TRIGGER trg_test_ai
2 AFTER INSERT
3 ON trigger_test
4 FOR EACH ROW
5 BEGIN
6 NULL;
7 END;
8 /
Trigger created
Executed in 0,094 seconds
SQL> INSERT
2 INTO trigger_test
3 SELECT level
4 FROM dual
5 CONNECT BY
6 level <= 1000000
7 /
1000000 rows inserted
Executed in 17,578 seconds
1.47 seconds without a trigger, 17.57 seconds with an empty trigger doing nothing.
From MSDN: Cursor Implementations:
Using a cursor is less efficient than using a default result set. In a default result set the only packet sent from the client to the server is the packet containing the statement to execute. When using a server cursor, each FETCH statement must be sent from the client to the server, where it must be parsed and compiled into an execution plan.
If a Transact-SQL statement will return a relatively small result set that can be cached in the memory available to the client application, and you know before executing the statement that you must retrieve the entire result set, use a default result set. Use server cursors only when cursor operations are required to support the functionality of the application, or when only part of the result set is likely to be retrieved.
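To make the quote concrete, a hedged T-SQL sketch (dbo.orders is a hypothetical table):

-- Default result set: one statement goes to the server, rows stream back.
SELECT id, amount FROM dbo.orders;

-- Server cursor: every FETCH is its own statement the server must process.
DECLARE c CURSOR FOR SELECT id, amount FROM dbo.orders;
OPEN c;
FETCH NEXT FROM c;   -- repeated once per row the client wants
CLOSE c;
DEALLOCATE c;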
I'm not an Oracle DBA, so I can't really speak to how the implementations differ. However, from a programming standpoint, set-based operations are almost always faster than processing results in a cursor.
I have always been told that cursors were evil, but always by MS SQL Server gurus, because of their bad performance. Regarding Oracle's PL/SQL, I found this advice on when to use cursors:
Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact in performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.
As cursors are implicitly created on every operation, it doesn't seem so performance-punishing to use them when needed :)
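That advice, sketched in PL/SQL (emp is a hypothetical table); static PL/SQL treats the cursor parameter as a bind variable, so the statement is parsed once and executed many times:

DECLARE
  CURSOR c (p_id NUMBER) IS
    SELECT ename FROM emp WHERE empno = p_id;   -- p_id is bound at OPEN
  v_name emp.ename%TYPE;
BEGIN
  FOR i IN 1 .. 100 LOOP
    OPEN c(i);            -- no re-parse: same statement, new bind value
    FETCH c INTO v_name;
    CLOSE c;
  END LOOP;
END;
/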
Remember that Oracle's implementation is closer to Postgres than to Sybase (the genesis of MS SQL Server), so performance will differ between them on different tasks. If you can, avoid the hassle of tweaking for performance on systems that can swap back-ends; go for the least common denominator if you need to work with both. /tangential_topic
I'm sure someone can explain it in more detail, but it basically comes down to this: cursors in SQL Server are SLOW.
- why-do-people-hate-sql-cursors-so-much
- why-is-it-considered-bad-practice-to-use-cursors-in-sql-server
...and a very good article here...
The other answers correctly point out the performance issues with cursors, but they don't mention that SQL and relational databases are best at set-based operations and cursors are fundamentally for iterative operations. There are some operations (in the broader sense) that are easier to perform using cursors, but when working with SQL you should always be thinking about working with sets of data. Cursors are often misused because the coder didn't grasp how to perform the task using set-based operations.
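As a hedged illustration of that shift in thinking (dbo.accounts and its columns are invented), here is the same task written row by row with a cursor and then as one set-based statement:

-- Cursor version: procedural, row by row.
DECLARE @id INT;
DECLARE c CURSOR FOR SELECT id FROM dbo.accounts WHERE balance < 0;
OPEN c;
FETCH NEXT FROM c INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.accounts SET flagged = 1 WHERE id = @id;
    FETCH NEXT FROM c INTO @id;
END;
CLOSE c;
DEALLOCATE c;

-- Set-based version: one statement does the same work.
UPDATE dbo.accounts SET flagged = 1 WHERE balance < 0;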