When building and tuning a query in Oracle, speed is generally the developer's main concern. However, while tuning a particular query, I recently tried the FIRST_ROWS and NO_CPU_COSTING hints, and the resulting execution plan is 80% faster than the previous plan in execution time but carries a 300% higher cost. There is very little I/O in the execution plan, and all of the additional cost appears to come from a nested loop outer join between two views.
This query is paginated, so I will only ever need the first few hundred rows. The lack of significant I/O leads me to think that this plan will not be cache-dependent, and at first glance it seems like the way to go. However, since I've never seen a plan gain so much speed and cost at the same time, I'm not sure what the drawbacks of using it might be. Are there any?
This is pretty typical of a query with an equijoin that is optimised to use a hash join when the full data set is required and a nested loop when only the first few rows are needed, or where a sort is used for an ORDER BY on the full data set but an index can be used more efficiently for a subset.
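To illustrate the first case, here is a minimal sketch (the tables, columns, and index are hypothetical) that costs the same equijoin under both optimiser goals and inspects each plan with DBMS_XPLAN:

    -- Hypothetical tables; assumes an index on order_items(order_id).
    -- Costed for the full result set: the optimiser tends to favour a hash join.
    EXPLAIN PLAN FOR
    SELECT /*+ ALL_ROWS */ o.order_id, i.product_id
    FROM   orders o
    JOIN   order_items i ON i.order_id = o.order_id;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    -- Costed for the first 100 rows: an index-driven nested loop typically
    -- wins, even though its cost over the full result set is higher.
    EXPLAIN PLAN FOR
    SELECT /*+ FIRST_ROWS(100) */ o.order_id, i.product_id
    FROM   orders o
    JOIN   order_items i ON i.order_id = o.order_id;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The nested loop's reported cost covers a probe per driving row, which is exactly the work that never happens when you stop after the first page.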
Of course, if the optimiser is not aware that you are only going to use a subset of the rows, then the cost it reports is not for the query you will actually execute: it includes the cost of all the nested loop iterations that will never run.
However, there is nothing incorrect about the estimated cost; it just is what it is. If you want a more meaningful figure for your own understanding, use a ROWNUM limit.
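For example, a sketch using the hypothetical query from above: wrapping it in a ROWNUM filter tells the optimiser that only the first rows matter, so the reported cost reflects the work that will actually be done.

    -- Standard Oracle top-N pattern: the inline view sorts, the outer
    -- ROWNUM predicate limits, and the optimiser costs only that subset.
    SELECT *
    FROM  (SELECT o.order_id, i.product_id
           FROM   orders o
           JOIN   order_items i ON i.order_id = o.order_id
           ORDER  BY o.order_id)
    WHERE ROWNUM <= 100;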
By the way, plain FIRST_ROWS is deprecated in favour of the parameterised form: FIRST_ROWS(1), FIRST_ROWS(10), FIRST_ROWS(100) or FIRST_ROWS(1000).
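If you want the goal to apply beyond a single statement rather than via a per-query hint, the same intent can be set at session level through the optimizer_mode parameter (a sketch; note the underscore spelling of the parameter values):

    -- Session-level equivalent of the FIRST_ROWS(100) hint;
    -- applies to every query in the session until changed back.
    ALTER SESSION SET optimizer_mode = FIRST_ROWS_100;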