I have a table:
create table Transactions(Tid int, amt int)
With 5 rows:
insert into Transactions values(1, 100)
insert into Transactions values(2, -50)
insert into Transactions values(3, 100)
insert into Transactions values(4, -100)
insert into Transactions values(5, 200)
Desired output:
TID    amt  balance
---  -----  -------
  1    100      100
  2    -50       50
  3    100      150
  4   -100       50
  5    200      250
Basically, for the first record the balance will be the same as amt; from the 2nd record onwards, the balance is the previous balance plus the current amt. I am looking for an optimal approach. I could think of using a function or a correlated subquery, but I'm not sure exactly how to do it.
For those not using SQL Server 2012 or above, a cursor is likely the most efficient supported and guaranteed method outside of CLR. There are other approaches, such as the "quirky update", which can be marginally faster but is not guaranteed to work in the future; set-based approaches whose performance degrades quadratically as the table gets larger; and recursive CTE methods that often require direct #tempdb I/O or result in spills with roughly the same impact.
INNER JOIN - do not do this:
The slow, set-based approach is of the form:
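For illustration only, a sketch of that form against the sample table (a self-join that re-aggregates all earlier rows for every row):

SELECT t.Tid, t.amt, balance = SUM(t2.amt)
FROM dbo.Transactions AS t
INNER JOIN dbo.Transactions AS t2
  ON t2.Tid <= t.Tid
GROUP BY t.Tid, t.amt
ORDER BY t.Tid;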
The reason this is slow? As the table gets larger, each incremental row requires reading n-1 rows in the table, so the total work grows quadratically (roughly n²/2 reads). This is bound to lead to failures, timeouts, or just angry users.
Correlated subquery - do not do this either:
The subquery form is similarly painful for similarly painful reasons.
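Again for illustration only, a sketch of the subquery form:

SELECT Tid, amt,
  balance = amt + COALESCE(
  (
    SELECT SUM(amt)
      FROM dbo.Transactions AS i
      WHERE i.Tid < o.Tid
  ), 0)
FROM dbo.Transactions AS o
ORDER BY Tid;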
Quirky update - do this at your own risk:
The "quirky update" method is more efficient than the above, but the behavior is not documented, there are no guarantees about order, and the behavior might work today but could break in the future. I'm including this because it is a popular method and it is efficient, but that doesn't mean I endorse it. The primary reason I even answered this question instead of closing it as a duplicate is because the other question has a quirky update as the accepted answer.
Recursive CTEs
This first one relies on Tid being contiguous, with no gaps:
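A sketch, which additionally assumes Tid starts at 1 (use MIN(Tid) in the anchor otherwise):

;WITH x AS
(
  SELECT Tid, amt, balance = amt
    FROM dbo.Transactions
    WHERE Tid = 1
  UNION ALL
  SELECT y.Tid, y.amt, x.balance + y.amt
    FROM x
    INNER JOIN dbo.Transactions AS y
      ON y.Tid = x.Tid + 1
)
SELECT Tid, amt, balance
  FROM x
  ORDER BY Tid
  OPTION (MAXRECURSION 10000);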
If you can't rely on this, then you can use this variation, which simply builds a contiguous sequence using ROW_NUMBER():
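A sketch of that variation:

;WITH y AS
(
  SELECT Tid, amt, rn = ROW_NUMBER() OVER (ORDER BY Tid)
    FROM dbo.Transactions
), x AS
(
  SELECT Tid, rn, amt, balance = amt
    FROM y
    WHERE rn = 1
  UNION ALL
  SELECT y.Tid, y.rn, y.amt, x.balance + y.amt
    FROM x
    INNER JOIN y
      ON y.rn = x.rn + 1
)
SELECT Tid, amt, balance
  FROM x
  ORDER BY Tid
  OPTION (MAXRECURSION 10000);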
Depending on the size of the data (e.g. columns we don't know about), you may find better overall performance by stuffing the relevant columns only in a #temp table first, and processing against that instead of the base table:
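For example (a sketch; the #x name and its index are arbitrary):

SELECT Tid, amt, rn = ROW_NUMBER() OVER (ORDER BY Tid)
  INTO #x
  FROM dbo.Transactions;

CREATE UNIQUE CLUSTERED INDEX x ON #x(rn);

;WITH x AS
(
  SELECT Tid, rn, amt, balance = amt
    FROM #x
    WHERE rn = 1
  UNION ALL
  SELECT y.Tid, y.rn, y.amt, x.balance + y.amt
    FROM x
    INNER JOIN #x AS y
      ON y.rn = x.rn + 1
)
SELECT Tid, amt, balance
  FROM x
  ORDER BY Tid
  OPTION (MAXRECURSION 10000);

DROP TABLE #x;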
Only the first CTE method will provide performance rivaling the quirky update, but it makes a big assumption about the nature of the data (no gaps). The performance of the other two methods will fall behind, and in those cases you may as well use a cursor (if you can't use CLR and you're not yet on SQL Server 2012 or above).
Cursor
Everybody is told that cursors are evil, and that they should be avoided at all costs, but this actually beats the performance of most other supported methods, and is safer than the quirky update. The only ones I prefer over the cursor solution are the 2012 and CLR methods (below):
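A sketch of a cursor solution (the specific cursor options shown are typical choices for this pattern, not requirements):

DECLARE @Tid int, @amt int, @balance int;
SET @balance = 0;

DECLARE @out TABLE(Tid int PRIMARY KEY, amt int, balance int);

DECLARE c CURSOR LOCAL STATIC READ_ONLY FORWARD_ONLY
  FOR SELECT Tid, amt FROM dbo.Transactions ORDER BY Tid;

OPEN c;

FETCH c INTO @Tid, @amt;

WHILE @@FETCH_STATUS = 0
BEGIN
  SET @balance = @balance + @amt;
  INSERT @out(Tid, amt, balance) VALUES(@Tid, @amt, @balance);
  FETCH c INTO @Tid, @amt;
END

CLOSE c;
DEALLOCATE c;

SELECT Tid, amt, balance FROM @out ORDER BY Tid;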
SQL Server 2012 or above
New window functions introduced in SQL Server 2012 make this task a lot easier (and it performs better than all of the above methods as well):
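For a running total, that looks like this (ROWS specified explicitly, for the reasons discussed below):

SELECT Tid, amt,
  balance = SUM(amt) OVER (ORDER BY Tid ROWS UNBOUNDED PRECEDING)
FROM dbo.Transactions
ORDER BY Tid;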
Note that on larger data sets, you'll find that the above performs much better than either of the following two options, since RANGE uses an on-disk spool (and the default uses RANGE). However it is also important to note that the behavior and results can differ, so be sure they both return correct results before deciding between them based on this difference.
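To make the comparison concrete, the two slower alternatives are the explicit RANGE frame and the frame-less default (which is also RANGE):

SELECT Tid, amt,
  balance = SUM(amt) OVER (ORDER BY Tid RANGE UNBOUNDED PRECEDING)
FROM dbo.Transactions
ORDER BY Tid;

SELECT Tid, amt,
  balance = SUM(amt) OVER (ORDER BY Tid)
FROM dbo.Transactions
ORDER BY Tid;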
CLR
For completeness, I'm offering a link to Pavel Pawlowski's CLR method, which is by far the preferable method on versions prior to SQL Server 2012 (but not 2000, obviously, since CLR integration requires SQL Server 2005 or above).
http://www.pawlowski.cz/2010/09/sql-server-and-fastest-running-totals-using-clr/
Conclusion
If you are on SQL Server 2012 or above, the choice is obvious - use the new SUM() OVER() construct (with ROWS vs. RANGE). For earlier versions, you'll want to compare the performance of the alternative approaches on your schema and data and - taking non-performance-related factors into account - determine which approach is right for you. It very well may be the CLR approach. Here are my recommendations, in order of preference:

1. SUM() OVER() ... ROWS, if on 2012 or above
2. The CLR method, if you can use it
3. The cursor
4. The recursive CTE

Use the quirky update only at your own risk, and avoid the join and correlated subquery forms entirely.

For further information with performance comparisons of these methods, see this question on http://dba.stackexchange.com:
https://dba.stackexchange.com/questions/19507/running-total-with-count
I've also blogged more details about these comparisons here:
http://www.sqlperformance.com/2012/07/t-sql-queries/running-totals
Also for grouped/partitioned running totals, see the following posts:
http://sqlperformance.com/2014/01/t-sql-queries/grouped-running-totals
Partitioning results in a running totals query
Multiple Running Totals with Group By
If you use version 2012, here is a solution:
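A sketch using the windowed SUM (assuming the Transactions table from the question):

SELECT Tid, amt,
  SUM(amt) OVER (ORDER BY Tid ROWS UNBOUNDED PRECEDING) AS balance
FROM Transactions
ORDER BY Tid;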
For earlier versions
We're on 2008 R2, and I use variables and a temp table. This also allows you to do custom things when calculating each row using a CASE statement (e.g. certain transactions may act differently, or you may only want a total for specific transaction types).
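As a rough sketch of that pattern (the CASE here is only a placeholder for your own per-transaction logic, and the same ordering caveats as any variable-based update apply):

SELECT Tid, amt, balance = 0
  INTO #run
  FROM Transactions;

CREATE UNIQUE CLUSTERED INDEX cx ON #run(Tid);

DECLARE @balance int;
SET @balance = 0;

UPDATE #run
  SET @balance = balance = @balance +
      CASE WHEN amt IS NULL THEN 0 ELSE amt END -- customize per transaction type here
  OPTION (MAXDOP 1);

SELECT Tid, amt, balance FROM #run ORDER BY Tid;

DROP TABLE #run;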
We have a transaction table with 2.3 million rows with an item that has over 3,300 transactions, and running this type of query against that takes no time at all.
In SQL Server 2008+
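One form that works on 2008 is a per-row aggregate via CROSS APPLY (a sketch; note that it rescans the table for every row, so it carries the performance caveats described above):

SELECT t.Tid, t.amt, ca.balance
FROM Transactions AS t
CROSS APPLY
(
  SELECT SUM(amt) AS balance
    FROM Transactions
    WHERE Tid <= t.Tid
) AS ca
ORDER BY t.Tid;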
In SQL Server 2012+
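For example:

SELECT Tid, amt,
  SUM(amt) OVER (ORDER BY Tid
                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS balance
FROM Transactions
ORDER BY Tid;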