How is it different from asymptotic analysis? When do you use it, and why?
I've read some articles that seem to be well written, like these:
http://www.ugrad.cs.ubc.ca/~cs320/2010W2/handouts/aa-nutshell.pdf
http://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf
but I still haven't fully understood these concepts.
So, can anyone please simplify it for me?
Asymptotic analysis
This term refers to the analysis of algorithm performance under the assumption that the data the algorithm operates on (the input) is, in layman's terms, "large enough that making it larger will not change the conclusion". Although the exact size of the input does not need to be specified (we only need an upper bound), the data set itself has to be specified.
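To pin down "large enough" formally (this is the standard textbook definition, not taken from the question's links): f(n) = O(g(n)) means that beyond some threshold size n0, g dominates f up to a constant factor:

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 1 : \forall\, n \ge n_0,\ f(n) \le c \cdot g(n)
```

Past n0, enlarging the input cannot change this conclusion, which is exactly the assumption described above.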
Note that so far we have only talked about the method of analysis; we have not specified exactly which quantity we are analyzing (time complexity? space complexity?), and neither have we specified which metric we are interested in (worst case? best case? average?).
In practice the term asymptotic analysis commonly refers to the upper-bound time complexity of an algorithm, i.e. the worst-case performance measured by total running time, which is represented by big-Oh notation (e.g. a sorting algorithm might be O(n log n)).
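For example, here is a minimal empirical sketch (my own illustration, not from the linked handouts; the merge sort implementation and comparison counter are assumptions for demonstration): counting the comparisons merge sort makes shows the work per element growing like log n, consistent with an O(n log n) bound.

```python
import math
import random

def merge_sort(a, counter):
    """Sort a list while counting element comparisons in counter[0]."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        counter[0] += 1  # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

for n in (1_000, 10_000, 100_000):
    counter = [0]
    merge_sort([random.random() for _ in range(n)], counter)
    # comparisons / (n log2 n) settles near a constant as n grows
    print(n, round(counter[0] / (n * math.log2(n)), 3))
```

The ratio stabilizing is the point: once the input is "large enough", making it larger no longer changes the conclusion.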
Amortized analysis
This term refers to the analysis of algorithm performance based on a specific sequence of operations that targets the worst case scenario -- that is, amortized analysis does imply that the metric is worst case performance (although it still does not say which quantity is being measured). To perform this analysis, we need to specify the size of the input, but we do not need to make any assumptions about its form.
In layman's terms, amortized analysis is picking an arbitrary size for the input and then "playing through" the algorithm. Whenever a decision that depends on the input must be made, the worst path is taken¹. After the algorithm has run to completion we divide the calculated complexity by the size of the input to produce the final result.
¹note: To be precise, the worst path that is theoretically possible. If you have a vector that dynamically doubles in size each time its capacity is exhausted, "worst case" does not mean to assume that it will need to double upon every insertion because the insertions are processed as a sequence. We are allowed to (and indeed must) use known state to mathematically eliminate as many "even worse" cases as we can, even while the input remains unknown.
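To make that "playing through" concrete, here is a toy sketch (my own illustration; the DoublingArray class and its write counter are hypothetical, not from the original text) that appends n items to a doubling vector and compares the worst single-operation cost with the total cost divided by n:

```python
class DoublingArray:
    """Toy dynamic array that doubles capacity when full, counting element writes."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity

    def append(self, x):
        cost = 0
        if self.size == self.capacity:
            # resize: copy every existing element into a buffer twice as large
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                cost += 1
            self.data = new_data
        self.data[self.size] = x
        self.size += 1
        cost += 1  # the write of x itself
        return cost

n = 1_000_000
arr = DoublingArray()
costs = [arr.append(i) for i in range(n)]
print("worst single append:", max(costs))       # O(n): one append triggers a full copy
print("amortized per append:", sum(costs) / n)  # stays below 3, i.e. O(1)
```

Counting it out: n unit writes plus copies of 1 + 2 + 4 + ... < 2n elements comes to fewer than 3n operations in total. So although one unlucky append costs O(n), the amortized cost per append is O(1); because the copies happen as a sequence, they cannot all be expensive, which is precisely the elimination of "even worse" cases described in the footnote.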
The most important difference
The critical difference between asymptotic and amortized analysis is that the former is dependent on the input itself, while the latter is dependent on the sequence of operations the algorithm will execute.
Therefore:
- asymptotic analysis lets us assert that the complexity of the algorithm, when it is given a best/worst/average-case input of size approaching N, is bounded by some function F(N);
- amortized analysis lets us assert that the complexity of the algorithm, when it is given an input of unknown characteristics but known size N, is no worse than the value of a function F(N).