kota's memex

Simplified analysis of an algorithm's efficiency. It's machine-independent and gives complexity in terms of input size.

Usually when discussing big-O notation we ignore constants and are most interested in the worst-case runtime.
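For example, a linear search (a sketch, not from the note itself) shows why we focus on the worst case: it may find the target on the first comparison or only after scanning all n elements, and big-O describes the latter.

```python
def linear_search(items, target):
    """Scan left to right; return the index of target, or -1 if absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
linear_search(data, 4)    # best case: found immediately, 1 comparison
linear_search(data, 99)   # worst case: scans all n elements, O(n)
```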

From fast to slow:

O(1) - Constant time

Independent of input size, n

x = 5 + (15 * 20)

O(n) - Linear time

for x in range(0, n):
  print(x)

O(n^2) - Quadratic time

for x in range(0, n):
  for y in range(0, n):
    print(x * y)

Details

Big O notation describes the complexity of your code using algebraic terms. For example, O(n^2) is pronounced "Big O of n squared". The letter n represents the input size, and the function g(n) = n^2 inside the O() gives us an idea of how complex the algorithm is with respect to the input size.

A typical example of O(n^2) complexity is the selection sort algorithm, described by the following code. To make sure the ith element is the ith-smallest element in the list, the algorithm first iterates through the list with a for loop. Then, for each element, it uses another for loop to find the smallest element in the remaining part of the list.

SelectionSort(List) {
  for(i from 0 to List.Length) {
    SmallestIndex = i
    for(j from i to List.Length) {
      if(List[SmallestIndex] > List[j]) {
        SmallestIndex = j
      }
    }
    Swap(List[i], List[SmallestIndex])
  }
}
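The pseudocode above can be translated into runnable Python; this sketch tracks the index of the smallest element so the final swap works in place:

```python
def selection_sort(lst):
    """Sort lst in place using selection sort; O(n^2) comparisons."""
    for i in range(len(lst)):
        smallest = i  # index of the smallest element seen so far
        for j in range(i, len(lst)):
            if lst[j] < lst[smallest]:
                smallest = j
        # put the smallest remaining element into position i
        lst[i], lst[smallest] = lst[smallest], lst[i]
    return lst

selection_sort([23, 42, 4, 16, 8, 15])  # → [4, 8, 15, 16, 23, 42]
```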

In this scenario, we consider the variable List as the input, so the input size n is the number of elements in List. Assume the if statement, and the value assignment bounded by the if statement, take constant time. Then we can find the big O notation for the SelectionSort function by analyzing how many times its statements are executed.

First the inner loop runs its statements n times. Then, after i is incremented, the inner loop runs n-1 times, and so on, until it runs just once and both loops terminate.

This ends up giving us an arithmetic sum, and with some high-school math we find that the inner loop repeats 1 + 2 + ... + n times in total, which equals n(n+1)/2. If we multiply this out, we get n^2/2 + n/2.
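We can sanity-check that sum empirically by counting how often the innermost statement runs (a small sketch with a hypothetical helper name):

```python
def count_comparisons(n):
    """Count how many times the inner if-statement executes
    when selection sort runs on a list of n elements."""
    count = 0
    for i in range(n):
        for j in range(i, n):  # same bounds as the inner loop above
            count += 1
    return count

# matches n + (n-1) + ... + 1 = n(n+1)/2
count_comparisons(10)  # → 55, i.e. 10 * 11 / 2
```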

When we calculate big O notation, we only care about the dominant term, and we do not care about the coefficients. Thus we keep the n^2 and write the result as O(n^2).
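A quick numeric check (hypothetical helper names) of why only the dominant term matters: the exact count divided by n^2 settles toward a constant as n grows, so the n/2 term and the 1/2 coefficient don't change the growth rate.

```python
def exact(n):
    return n * n / 2 + n / 2   # exact statement count from above

def dominant(n):
    return n * n               # the term big-O keeps

# The ratio approaches the constant 1/2 as n grows.
exact(10) / dominant(10)        # → 0.55
exact(10000) / dominant(10000)  # → 0.50005
```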