- Describes how an algorithm scales and performs, in terms of either the execution time required or the space used.
- Is a relative representation of complexity. It reduces an algorithm to a single variable, which in turn lets you easily compare it to another algorithm.
- Describes an upper bound on the growth of a function; in other words, the "worst case scenario".
There is also Big-Omega notation, which looks at the lower bound (the "best case scenario"), stating that the algorithm will take at least X amount of time, and Big-Theta, which is a tight bound on both the lower and upper sides, often loosely described as the "average".
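The three bounds can be stated formally (a standard textbook formulation, not from the original text):

```latex
% Upper bound: f grows no faster than g (up to a constant factor)
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c\, g(n)

% Lower bound: f grows at least as fast as g
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c\, g(n)

% Tight bound: f grows exactly as fast as g
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```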
Some quick observations for determining Big-O:

- A sequence of statements, or simple operations like a conditional check, is constant: O(1).
- A loop of statements results in O(n), n being the number of loop iterations.
- Nested loops are multiplied together: O(n × m), where n is the number of times the outer loop executes and m is the number of times the inner loop executes (O(n²) when both are n).
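These three rules can be sketched in Java (a minimal illustration; method names are my own):

```java
public class GrowthRules {

    // O(1): a fixed sequence of statements plus a conditional check —
    // the same number of operations regardless of input size
    static int constant(int[] a) {
        if (a.length == 0) return 0;
        return a[0] + a[a.length - 1];
    }

    // O(n): a single loop over the input runs n times
    static int linear(int[] a) {
        int total = 0;
        for (int x : a) total += x;
        return total;
    }

    // O(n * m): nested loops multiply — the body runs n * m times
    static int nested(int[] a, int[] b) {
        int hits = 0;
        for (int x : a)          // n iterations
            for (int y : b)      // m iterations for each of the n
                if (x == y) hits++;
        return hits;
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 6};
        System.out.println(constant(a));   // 8
        System.out.println(linear(a));     // 12
        System.out.println(nested(a, a));  // 3
    }
}
```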
Comparing the common notation examples:

(Thanks to Algorithms: Big-Oh Notation.)

|Notation  |Name              |
|----------|------------------|
|O(1)      |Constant          |
|O(log n)  |Logarithmic       |
|O(n)      |Linear            |
|O(n log n)|Linear Logarithmic|
|O(n²)     |Quadratic         |
|O(2ⁿ)     |Exponential       |
Java code example showing the notations in the table above:
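A minimal sketch of the logarithmic, linear-logarithmic, and exponential rows (method names are my own, not from the original):

```java
import java.util.Arrays;

public class NotationExamples {

    // O(log n): binary search halves the remaining range each step
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;   // midpoint without overflow
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;                        // not found
    }

    // O(n log n): comparison-based sorting (Arrays.sort on primitives
    // uses a dual-pivot quicksort with n log n average cost)
    static int[] sortCopy(int[] a) {
        int[] copy = Arrays.copyOf(a, a.length);
        Arrays.sort(copy);
        return copy;
    }

    // O(2^n): naive recursive Fibonacci — two recursive calls per level
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int[] sorted = sortCopy(new int[]{9, 1, 7, 3, 5}); // O(n log n)
        System.out.println(Arrays.toString(sorted));        // [1, 3, 5, 7, 9]
        System.out.println(binarySearch(sorted, 7));        // O(log n) -> 3
        System.out.println(fib(10));                        // O(2^n) -> 55
    }
}
```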
Common data structures and the relative cost of their operations:

Lists and Sets:

|          |add     |remove  |contains|next    |
|----------|--------|--------|--------|--------|
|ArrayList |O(1)    |O(n)    |O(n)    |O(1)    |
|LinkedList|O(1)    |O(1)    |O(n)    |O(1)    |
|HashSet   |O(1)    |O(1)    |O(1)    |O(h/n)  |
|TreeSet   |O(log n)|O(log n)|O(log n)|O(log n)|

(h is the capacity of the hash table.)
Thanks to reader comments: the add method on an ArrayList should be O(1) amortized (and O(n) in the worst case, when the backing array must be resized).

Maps:

|       |put     |get     |containsKey|next    |
|-------|--------|--------|-----------|--------|
|HashMap|O(1)    |O(1)    |O(1)       |O(h/n)  |
|TreeMap|O(log n)|O(log n)|O(log n)   |O(log n)|

Useful reference links:
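A short Java sketch of the trade-offs in the tables above (illustrative only — the values and keys are my own):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.NavigableSet;
import java.util.TreeMap;
import java.util.TreeSet;

public class CollectionCosts {
    public static void main(String[] args) {
        // ArrayList: add is O(1) amortized, but contains is O(n) —
        // it scans the backing array linearly
        List<Integer> list = new ArrayList<>();
        for (int i = 0; i < 5; i++) list.add(i * 10);   // 0,10,20,30,40
        System.out.println(list.contains(30));          // true, via linear scan

        // TreeSet: add/remove/contains are all O(log n) (red-black tree),
        // and iteration visits elements in sorted order
        NavigableSet<Integer> set = new TreeSet<>(list);
        System.out.println(set.first());                // 0
        System.out.println(set.ceiling(25));            // 30, also O(log n)

        // TreeMap: put/get/containsKey are O(log n), keys kept sorted
        NavigableMap<String, Integer> map = new TreeMap<>();
        map.put("b", 2);
        map.put("a", 1);
        map.put("c", 3);
        System.out.println(map.firstKey());             // a
        System.out.println(map.get("b"));               // 2
    }
}
```

The logarithmic cost buys ordering: if you never need sorted traversal or range queries, HashSet/HashMap's O(1) operations are the cheaper choice.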