
Computation Time

Computation time, also known as runtime or execution time, refers to the amount of time taken by a computer program or algorithm to complete its execution, typically measured in seconds or milliseconds.

Goals​

By the end of this lesson, you should be able to:

  • Define Big-O notation and other asymptotic notations
  • Derive complexity of a code using its computation model
  • Measure computation time for bubble sort, insertion sort, built-in sort, and heapsort
  • Create plots from array data for visualising computational time
Keywords

complexity, time complexity, asymptotic notation, computation time, computational model, logarithmic, linear, log-linear, quadratic, polynomial, exponential

Introduction​

The performance of a computer program can be measured by its computation time and the memory space it occupies. In this section, we will focus on analysing and measuring computation time.

Measuring Computation Time​

We are interested in the trend of the computation time as the input size changes. For example, consider the sorting algorithms that we have studied thus far. How does the computation time increase as we increase the number of elements in the input array? In analysing this, we are interested in the upper bound, and so we normally use the Big Oh notation to indicate the upper bound on the computation time of a program.

We can investigate this empirically by creating lists of different sizes, i.e. 10 elements, 100 elements, etc. We can then ask our sorting algorithm to sort these numbers. We can also compare the performance of the sorting algorithm when the input list is already sorted and when it is randomly shuffled. In this section, we make use of an asymptotic notation called the Big Oh notation, which describes an upper bound. We will introduce the mathematical definition of this notation at the end of these notes.

Setup​

We generate input arrays of integers with different numbers of elements, from 10 up to 100,000, i.e. 10 elements, 100 elements, 1,000 elements, 10,000 elements, and 100,000 elements. We run the sorting algorithms twice: once when the input array is randomly shuffled, and once when the input array is already sorted from the smallest to the largest. We present the results for the different algorithms in the next sections.
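As an illustration, here is a minimal timing sketch in Python. It uses the built-in `list.sort` so that it runs on its own; the `time_sort` helper and the variable names are ours, and you would substitute the `bubble_sort`, `insertion_sort`, or `heapsort` functions from the earlier lessons.

```python
import random
import time

def time_sort(sort_function, array):
    """Return the time in seconds taken by sort_function on a copy of array."""
    data = list(array)                    # copy so every run gets the same input
    start = time.time()
    sort_function(data)
    return time.time() - start

sizes = [10, 100, 1_000, 10_000, 100_000]
shuffled_times = []
sorted_times = []
for n in sizes:
    numbers = list(range(n))
    random.shuffle(numbers)
    # list.sort is used here only so the sketch is self-contained;
    # replace it with bubble_sort, insertion_sort, or heapsort.
    shuffled_times.append(time_sort(list.sort, numbers))
    sorted_times.append(time_sort(list.sort, sorted(numbers)))

print(shuffled_times)
print(sorted_times)
```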

Bubble Sort​

If we run version 1 of the Bubble Sort algorithm on the randomly shuffled arrays, the output times in seconds are shown here.

bubbletime = [5.7220458984375e-06, 2.2649765014648438e-05,
0.0014679431915283203, 0.2126140594482422,
25.051520347595215]

We can plot this to see the relationship between the number of elements and the computation time. To see the relationship better over this big range of numbers, we plot the log of $y$ against the log of $x$.
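One possible way to produce such a plot with matplotlib is sketched below; it assumes the five input sizes listed in the setup and plots the natural log of the time against the natural log of the number of elements.

```python
import math
import matplotlib.pyplot as plt

n_elements = [10, 100, 1_000, 10_000, 100_000]
bubbletime = [5.7220458984375e-06, 2.2649765014648438e-05,
              0.0014679431915283203, 0.2126140594482422,
              25.051520347595215]

log_n = [math.log(n) for n in n_elements]      # log of x
log_t = [math.log(t) for t in bubbletime]      # log of y

plt.plot(log_n, log_t, "o-")
plt.xlabel("log(number of elements)")
plt.ylabel("log(time in seconds)")
plt.title("Bubble Sort (version 1), shuffled input")
plt.show()
```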

We can see that the computation time increases as the input increases. Moreover, we can see that the relationship is almost a straight line when we plot the logarithm of the y axis against the logarithm of the x axis. In fact, the relationship is quadratic. If we take the slope of this log plot between 6 and 10 on the x axis, we get:

$$\text{slope} = \frac{0 - (-8)}{10 - 6} = 2$$

which makes sense because if we have

$$y = x^2$$

Taking the log of both sides gives us

$$\log(y) = 2 \log(x)$$

In this equation the slope is 2, as shown on the plot. We can also check that the relationship is quadratic by squaring the input and seeing whether the result falls on a straight line, as sketched below.
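For instance, a quick sketch of that check, assuming the same sizes and timings as above: if the relationship is quadratic, plotting the time against $n^2$ should give an approximately straight line.

```python
import matplotlib.pyplot as plt

n_elements = [10, 100, 1_000, 10_000, 100_000]
bubbletime = [5.7220458984375e-06, 2.2649765014648438e-05,
              0.0014679431915283203, 0.2126140594482422,
              25.051520347595215]

plt.plot([n ** 2 for n in n_elements], bubbletime, "o-")   # x-axis is n squared
plt.xlabel("n^2")
plt.ylabel("time (seconds)")
plt.show()
```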

This means that the computation time of the Bubble Sort algorithm is quadratic. We will write this computation time using the asymptotic Big Oh notation $O(\cdot)$. The mathematical definition of this notation will be introduced at the end.

$$T(n) = O(n^2)$$

On the other hand, this is the computation time when the input is already sorted.

bubbletimeSorted = [6.4373016357421875e-06, 1.9073486328125e-06,
4.291534423828125e-06, 3.147125244140625e-05, 0.00030159950256347656]

We can plot this again for the same input sizes.


Taking the slope between 8 and 11 on the x axis, we have approximately the following slope:

$$\text{slope} = \frac{-8 - (-11)}{11 - 8} = 1$$

We don't take the first few data points because the measurement is limited by the accuracy of floating point numbers in Python when the number is too small, e.g. $\approx 10^{-12}$.

Notice that the computation time now falls on a straight line with a slope of about 1. This shows that when the input is already sorted, the computation time increases linearly, instead of quadratically, as the input increases. This means that in the best-case scenario the computation time is linear.

$$T(n) = O(n)$$

Insertion Sort​

We can do the same with the Insertion Sort algorithm. Below is the output when the input is randomly shuffled.

insertiontime = [6.198883056640625e-06, 7.867813110351562e-06,
0.0006382465362548828, 0.06774091720581055, 6.839613199234009]

We can plot this with the same input.

We can again notice that, on these logarithmic scales, the computation time increases with a slope of about 2. This means that the computation time is also quadratic.

For now, we can say that the computation time for Insertion Sort is quadratic.

$$T(n) = O(n^2)$$

On the other hand, this is the output when the input is already sorted.

insertiontimeSorted = [5.7220458984375e-06, 1.430511474609375e-06,
4.0531158447265625e-06, 3.123283386230469e-05, 0.0003333091735839844]

And if we plot, we will see the following.

Similarly, in this plot, looking at the x axis between 7 and 11, the slope is about 1, indicating that the computation time is linear. So when the input is already sorted, the computation time increases linearly with the number of inputs, similar to Bubble Sort. This means that in the best-case scenario, the computation time for Insertion Sort is linear.

$$T(n) = O(n)$$

Heapsort​

We can now check the computation time for the heapsort algorithm. The computation times for randomly shuffled arrays are shown below.

heapsorttime = [5.0067901611328125e-06, 7.867813110351562e-06,
9.512901306152344e-05, 0.0012400150299072266,
0.015644311904907227, 0.21677017211914062]

A quick look actually shows that heapsort is much faster than the other two. Let's plot it.

We can see that the logarithmic plot has the slope of about:

$$\frac{-4 - (-8)}{12 - 8} \approx \frac{4}{4} \approx 1$$

The slope is actually slightly greater than one if we compute it accurately. We found that the computation time is neither linear nor quadratic. Notice that the computation time for 1,000,000 input elements is only about 0.2 seconds. The other algorithms were too slow for this many input elements, so we only went up to 100,000 elements for them. For 100,000 elements, Bubble Sort takes about 25 seconds while Insertion Sort takes about 7 seconds. Compare this with heapsort, which only takes about 0.016 seconds.

It turns out that the computation time for Heapsort is log-linear. We can see a linear relationship if the x-axis is computed as $n \log(n)$. We plot the y axis as it is. See the plot below.
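A possible sketch of this transformed plot, assuming the same input sizes as before plus 1,000,000 elements for heapsort:

```python
import math
import matplotlib.pyplot as plt

n_elements = [10, 100, 1_000, 10_000, 100_000, 1_000_000]
heapsorttime = [5.0067901611328125e-06, 7.867813110351562e-06,
                9.512901306152344e-05, 0.0012400150299072266,
                0.015644311904907227, 0.21677017211914062]

x = [n * math.log(n) for n in n_elements]   # x-axis is n * log(n)
plt.plot(x, heapsorttime, "o-")
plt.xlabel("n * log(n)")
plt.ylabel("time (seconds)")
plt.title("Heapsort, shuffled input")
plt.show()
```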

Notice that now the points fall almost in a straight line. This means that the computation time for heapsort is:

$$T(n) = O(n\log(n))$$

Now, what happens when we run the algorithm on an already sorted list? It turns out that the computation times for different numbers of input elements are as follows.

heapsorttimeSorted = [1.2874603271484375e-05, 7.3909759521484375e-06,
7.82012939453125e-05, 0.0008978843688964844,
0.009733200073242188, 0.11059808731079102]

It turns out that the computation time is still

$$T(n) = O(n\log(n))$$

We can plot this after modifying the x-axis accordingly.

Let's summarize our findings in a table form.

| Sorting Algorithm | Random List $T(n)$ | Sorted List $T(n)$ |
| --- | --- | --- |
| Bubble Sort | $O(n^2)$ | $O(n)$ |
| Insertion Sort | $O(n^2)$ | $O(n)$ |
| Heapsort | $O(n\log(n))$ | $O(n\log(n))$ |

Analysing Computation Time Using a Model

Given a computer program, can we analyse its computation time without resorting to experimentation as we did in the previous section? In this section, we will introduce how to analyse a computer program by looking at the code and applying a computational time model.

To do so, we are going to express the result using asymptotic notation. In particular, we will use the Big Oh notation for this purpose.

Bubble Sort Computation Time​

Let's take a look at the pseudocode version 1 for Bubble Sort.

Bubble Sort
Version: 1
Input: array
Output: None, sort in place
Steps:
1. n = length of array
2. For outer_index from 1 to n-1, do:
2.1 For inner_index from 1 to n-1, do:
2.1.1 first_number = array[inner_index - 1]
2.1.2 second_number = array[inner_index]
2.1.3 if first_number > second_number, do:
2.1.3.1 swap(first_number, second_number)

Computation time:

  • step 1: constant time, $O(1)$.
  • step 2: executed $n-1$ times, so $O(n-1) = O(n)$
    • step 2.1: executed $n-1$ times, so $O(n)$
      • steps 2.1.1 to 2.1.3 are all constant time, so it's $3 \times O(1) = O(1)$

So we can compute the computation time for Bubble Sort as follows.

$$T(n) = O(1) + O(n) \times (O(n) \times O(1))$$

This can be simplified to

$$T(n) = O(1) + O(n) \times O(n)$$

$$T(n) = O(1) + O(n^2)$$

$$T(n) = O(n^2)$$

This agrees with the previous section, where we stated that the computation time for Bubble Sort is quadratic. We can also say that the computation time is polynomial.
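For reference, a direct Python rendering of the version 1 pseudocode might look as follows; this is a sketch following the steps above, not necessarily the exact code from the earlier lessons.

```python
def bubble_sort_v1(array):
    """Sort the array in place using the unoptimised bubble sort (version 1)."""
    n = len(array)                                  # step 1
    for outer_index in range(1, n):                 # step 2: n-1 passes
        for inner_index in range(1, n):             # step 2.1: n-1 comparisons
            first_number = array[inner_index - 1]   # step 2.1.1
            second_number = array[inner_index]      # step 2.1.2
            if first_number > second_number:        # step 2.1.3
                # step 2.1.3.1: swap the two adjacent elements
                array[inner_index - 1] = second_number
                array[inner_index] = first_number

data = [5, 2, 9, 1, 7]
bubble_sort_v1(data)
print(data)   # [1, 2, 5, 7, 9]
```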

Let's take a look at version 4 of Bubble Sort.

Bubble Sort
Version: 4
Input: array
Output: None, sort in place
Steps:
1. n = length of array
2. swapped = True
3. As long as swapped is True, do:
3.1 swapped = False
3.2 new_n = 0
3.3 For inner_index from 1 to n-1, do:
3.3.1 first_number = array[inner_index - 1]
3.3.2 second_number = array[inner_index]
3.3.3 if first_number > second_number, do:
3.3.3.1 swap(first_number, second_number)
3.3.3.2 swapped = True
3.3.3.3 new_n = inner_index
3.4 n = new_n

Computation time:

  • steps 1 and 2 are constant time, i.e. $2 \times O(1)$, or simply $O(1)$.
  • step 3 is executed as long as there is a swap. In the worst case, the loop is repeated $n-1$ times, so the computation time is $O(n)$. In the best case, there is no swap because the array is already sorted, and the computation time is $O(1)$. So on average, the computation time is $O(n/2)$, which is the same as $O(n)$.
    • steps 3.1 and 3.2 are constant time, i.e. $O(1)$.
    • step 3.3 is executed $n-1$ times, so the computation time is $O(n)$.
      • steps 3.3.1 to 3.3.3 are all constant time, i.e. $O(1)$.
    • step 3.4 is also constant time, $O(1)$.

We can calculate the computation time on average as:

$$T(n) = 2 \times O(1) + O(n) \times (2 \times O(1) + O(n) \times (3 \times O(1)) + O(1))$$

This can be simplified to

$$T(n) = O(1) + O(n) \times (O(1) + O(n) + O(1))$$

$$T(n) = O(1) + O(n) \times O(n)$$

$$T(n) = O(1) + O(n^2)$$

$$T(n) = O(n^2)$$

So the optimised Bubble Sort in version 4 is still polynomial time, which is quadratic.
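A Python sketch of version 4, again following the pseudocode above; the names and details are ours.

```python
def bubble_sort_v4(array):
    """Optimised bubble sort: stop when no swap occurs and shrink the scanned range."""
    n = len(array)                                   # step 1
    swapped = True                                   # step 2
    while swapped:                                   # step 3
        swapped = False                              # step 3.1
        new_n = 0                                    # step 3.2
        for inner_index in range(1, n):              # step 3.3
            if array[inner_index - 1] > array[inner_index]:      # step 3.3.3
                # step 3.3.3.1: swap the two adjacent elements
                array[inner_index - 1], array[inner_index] = (
                    array[inner_index], array[inner_index - 1])
                swapped = True                       # step 3.3.3.2
                new_n = inner_index                  # step 3.3.3.3
        n = new_n                                    # step 3.4: everything after new_n is sorted

data = [3, 1, 2, 5, 4]
bubble_sort_v4(data)
print(data)   # [1, 2, 3, 4, 5]
```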

Insertion Sort Computation Time​

Now, let's consider the computation time of Insertion Sort. We will use version 2 of the Insertion Sort pseudocode.

Insertion Sort
Version: 2
Input: array
Output: None, sort in place
Steps:
1. n = length of array
2. For outer_index in Range(from 1 to n-1), do:
2.1 inner_index = outer_index # start with the i-th element
2.2 temporary = array[inner_index]
2.3 As long as (inner_index > 0) AND (temporary < array[inner_index - 1]), do:
2.3.1 array[inner_index] = array[inner_index - 1] # shift to the right
2.3.2 inner_index = inner_index - 1 # move to the left
2.4 array[inner_index] = temporary # save temporary to its final position

Computation time:

  • step 1 is $O(1)$
  • step 2 is executed $n-1$ times, so the time is $O(n)$.
    • steps 2.1 and 2.2 are all constant time, so it is $O(1)$.
    • step 2.3 is executed depending on the actual values in the array. In the worst case it is executed $n-1$ times, and in the best case the array is already ordered and it runs in constant time. On average, the computation time is $O(n/2)$, or simply $O(n)$.
      • steps 2.3.1 and 2.3.2 are constant time, i.e. $O(1)$.
    • step 2.4 is also constant time, i.e. $O(1)$.

We can calculate the computation time as follows.

$$T(n) = O(1) + O(n) \times (2 \times O(1) + O(n) \times O(1) + O(1))$$

$$T(n) = O(1) + O(n) \times (O(1) + O(n) + O(1))$$

$$T(n) = O(1) + O(n) \times O(n)$$

$$T(n) = O(1) + O(n^2)$$

$$T(n) = O(n^2)$$

So Insertion Sort is similar to Bubble Sort in its computation time: it is polynomial, with a quadratic relationship with respect to the number of inputs.
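A Python sketch of version 2 of Insertion Sort, following the pseudocode above:

```python
def insertion_sort_v2(array):
    """Sort the array in place by shifting larger elements to the right."""
    n = len(array)                                    # step 1
    for outer_index in range(1, n):                   # step 2
        inner_index = outer_index                     # step 2.1
        temporary = array[inner_index]                # step 2.2
        # step 2.3: shift elements greater than temporary one slot to the right
        while inner_index > 0 and temporary < array[inner_index - 1]:
            array[inner_index] = array[inner_index - 1]   # step 2.3.1
            inner_index -= 1                              # step 2.3.2
        array[inner_index] = temporary                # step 2.4

data = [5, 2, 9, 1, 7]
insertion_sort_v2(data)
print(data)   # [1, 2, 5, 7, 9]
```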

Heapsort Computation Time

Now, we will consider the heapsort algorithm's computation time. The pseudocode can be written as follows:

def heapsort(array):
Input:
- array: any arbitrary array
Output: None
Steps:
1. call build-max-heap(array)
2. heap_end_pos = length of array - 1 # index of the last element in the heap
3. As long as (heap_end_pos > 0), do:
3.1 swap( array[0], array[heap_end_pos])
3.2 heap_end_pos = heap_end_pos -1 # reduce heap size
3.3 call max-heapify(array[from index 0 to heap_end_pos inclusive], 0)

Computation time:

  • step 1 depends on the computation time of the build-max-heap() algorithm. We'll come to this later.
  • step 2 is constant time, i.e. $O(1)$
  • step 3 is done from $n-1$, the index of the last element in the array, down to 1, which is the second element. This means that it is executed $n-1$ times, so the computation time is $O(n)$.
    • steps 3.1 and 3.2 are constant time, $O(1)$.
    • step 3.3 depends on the computation time of max-heapify().

To get the computation time for heapsort, we need to check the computation time of build-max-heap() and max-heapify(). Let's start with the first one.

Computation Time for Build-Max-Heap​

def build-max-heap(array):
Input:
- array: arbitrary array of integers
Output: None, sort the element in place
Steps:
1. n = length of array
2. starting_index = integer(n / 2) - 1 # start from the middle or non-leaf node
3. For current_index in Range(from starting_index down to 0), do:
3.1 call max-heapify(array, current_index)

Computation time:

  • steps 1 and 2 are constant time, $O(1)$.
  • step 3 is fixed and executed $n/2$ times. We can say that the computation time is $O(n)$.
    • step 3.1, on the other hand, depends on the computation time of max-heapify.

Computation Time for Max-Heapify​

def max-heapify(A, i):
version: 2
Input:
- A = array storing the heap
- i = index of the current node to restore max-heap property
Output: None, restore the element in place
Steps:
1. current_i = i # current index starting from input i
2. swapped = True
3. As long as ( left(current_i) < length of array) AND swapped, do:
3.1 swapped = False
3.2 max_child_i = get the index of largest child of the node current_i
3.3 if array[max_child_i] > array[current_i], do:
3.3.1 swap( array[max_child_i], array[current_i])
3.3.2 swapped = True
3.4 current_i = max_child_i # move to the index of the largest child

Computation time for max-heapify():

  • steps 1 and 2 are constant time, $O(1)$.
  • step 3, on average, is executed in $O(\log(n))$ time.
    • steps 3.1 to 3.4 are all constant time, $O(1)$.
      • steps 3.3.1 and 3.3.2 are also constant time, $O(1)$.

In this case, the computation time is:

$$T(n) = O(1) + O(\log(n)) \times (O(1) \times O(1))$$

$$T(n) = O(\log(n))$$

How do we get the $\log(n)$ time for step 3?

You can skip this part if you want. But for those who are curious, let's draw some example binary trees and calculate the number of levels and the number of nodes (elements). In the table below, we represent the nodes at different levels with different symbols. For example, when there are 3 levels (the level 2 row in the table), the only node in level 0 is represented as o, level 1 nodes are represented as x (there are two of them), and level 2 nodes are represented as o again (there are four of them, two for each x node in level 1).

| level | diagram (different levels use different symbols) | nodes at level $i$ | total number of nodes |
| --- | --- | --- | --- |
| 0 | o | 1 | 1 |
| 1 | o xx | 2 | 3 |
| 2 | o x x oo oo | 4 | 7 |
| 3 | o x x oo oo xx xx xx xx | 8 | 15 |

From the above table we can see that the maximum number of elements at level i can be calculated from the level position as

$$n_{i} = 2^i$$

For example, at level $i=3$, the number of elements can be up to $n_i = 2^3 = 8$. The total number of elements can, therefore, be calculated from

$$n = \sum^{i_{max}}_{i=0} 2^i$$

In a geometric series, we can find the closed form of the summation as follows.

$$a + ar + ar^2 + \ldots + ar^{n-1} = \frac{a(1-r^n)}{1-r}$$

In our summation for the number of nodes $n$ in a binary heap, $a=1$ and $r=2$, and the number of terms is $i_{max}+1$. Therefore,

$$n = \frac{1-2^{i_{max}+1}}{1-2}$$

$$n = 2^{i_{max}+1}-1$$

For example, if the maximum level is $i_{max} = 3$ (four levels in total), then the maximum total number of elements is $n = 2^4 - 1 = 16 - 1 = 15$.

This also means that we can get the maximum level of the tree from the number of elements by applying the inverse function as follows.

$$level = \lfloor \log_2(n) \rfloor$$

In computing, we usually understand that the base of the log is 2. Therefore, we usually omit the base in our equation.

$$level = \lfloor \log(n) \rfloor$$
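As a quick sanity check, the small snippet below computes $\lfloor \log_2(n) \rfloor$ for a few values of $n$; the results match the table above (e.g. 15 nodes give a maximum level of 3).

```python
import math

for n in [1, 3, 7, 15, 16, 100]:
    print(n, math.floor(math.log2(n)))
# 1 -> 0, 3 -> 1, 7 -> 2, 15 -> 3, 16 -> 4, 100 -> 6
```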

Let's come back to step 3. This step is executed by moving current_i from one node to another. The movement is always from a parent node to a child node, that is, vertically across the levels of the binary tree. Since there are about $\log(n)$ levels, the maximum number of moves is about $\log(n) - 1$. That's why we say that the computation time is $O(\log(n))$.

Total Computation Time for Heapsort​

Let's recall and summarize the computation time of the different procedures.

  • Heapsort

    $$T(n) = T_{build-max-heap}(n) + O(1) + O(n) \times (O(1) + T_{max-heapify}(n))$$

    $$T(n) = T_{build-max-heap}(n) + O(n) \times T_{max-heapify}(n)$$

  • Build-Max-Heap

    $$T_{build-max-heap}(n) = O(1) + O(n) \times T_{max-heapify}(n)$$

    $$T_{build-max-heap}(n) = O(n) \times T_{max-heapify}(n)$$

  • Max-Heapify

    $$T_{max-heapify}(n) = O(\log(n))$$

From this we can calculate that the computation time for build-max-heap() is

$$T_{build-max-heap}(n) = O(n) \times O(\log(n))$$

$$T_{build-max-heap}(n) = O(n\log(n))$$

We can then substitute this into the computation time for Heapsort to get

$$T(n) = O(n\log(n)) + O(n) \times O(\log(n))$$

$$T(n) = O(n\log(n)) + O(n\log(n))$$

$$T(n) = O(n\log(n))$$
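Putting the three procedures together, a minimal Python sketch of heapsort could look as follows. It follows the pseudocode above, with an explicit heap_size parameter for max_heapify; the helper for finding the largest child is inlined, and the names are ours.

```python
def max_heapify(array, i, heap_size):
    """Restore the max-heap property for the subtree rooted at index i."""
    current_i = i                                          # step 1
    swapped = True                                         # step 2
    while 2 * current_i + 1 < heap_size and swapped:       # step 3: stop at a leaf or when no swap
        swapped = False                                    # step 3.1
        left, right = 2 * current_i + 1, 2 * current_i + 2
        max_child_i = left                                 # step 3.2: index of the largest child
        if right < heap_size and array[right] > array[left]:
            max_child_i = right
        if array[max_child_i] > array[current_i]:          # step 3.3
            array[max_child_i], array[current_i] = array[current_i], array[max_child_i]
            swapped = True
        current_i = max_child_i                            # step 3.4

def build_max_heap(array):
    """Turn an arbitrary array into a max-heap, in place."""
    n = len(array)                                         # step 1
    starting_index = n // 2 - 1                            # step 2: last non-leaf node
    for current_index in range(starting_index, -1, -1):    # step 3
        max_heapify(array, current_index, n)               # step 3.1

def heapsort(array):
    """Sort in place: build a max-heap, then repeatedly move the maximum to the end."""
    build_max_heap(array)                                  # step 1
    heap_end_pos = len(array) - 1                          # step 2
    while heap_end_pos > 0:                                # step 3
        array[0], array[heap_end_pos] = array[heap_end_pos], array[0]   # step 3.1
        heap_end_pos -= 1                                  # step 3.2: shrink the heap
        max_heapify(array, 0, heap_end_pos + 1)            # step 3.3: heap is array[0..heap_end_pos]

data = [5, 2, 9, 1, 7, 3]
heapsort(data)
print(data)   # [1, 2, 3, 5, 7, 9]
```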

Computational Time Summary​

In summary, different algorithms may have different performance in terms of computation time. The following image shows plots of some common computation times in computing.

In our examples above, both Bubble Sort and Insertion Sort are quadratic while Heapsort is log-linear. We can see that Heapsort is much faster as the number of elements grows large.

Asymptotic Notation​

Asymptotic notation is a shorthand used to give a quick measure of the behaviour of a function $f(n)$ as $n$ grows large. Here, we will introduce several notations that are common in analysing computer programs.

Big Oh​

Big Oh is the most frequently used notation in computing as it is used to give an upper bound on the growth of a function.

$$f = O(g)$$

if and only if,

$$\lim_{x\to\infty} \text{sup} \frac{f(x)}{g(x)} < \infty$$

Note that the word "sup" means superior or above, as it indicates the upper bound.

For example,

$$x^2 + 100x + 10 = O(x^2)$$

This is because

$$\lim_{x\to\infty} \text{sup} \frac{x^2 + 100x + 10}{x^2} = 1 < \infty$$

Little Oh​

This notation indicates that $f$ is asymptotically smaller than $g$, or in symbols,

$$f(x) = o(g(x))$$

This is true if and only if

$$\lim_{x\to\infty} \frac{f(x)}{g(x)} = 0$$

For example,

$$1000x^{1.9} = o(x^2)$$

This is because

$$\lim_{x\to\infty} \frac{1000x^{1.9}}{x^2} = \lim_{x\to\infty} \frac{1000}{x^{0.1}} = 0$$

Example of Big Oh and Little Oh​

What's the difference between Big Oh and Little Oh? To illustrate the difference, let's take a look at three functions.

  • $f(x) = 2x + 4$
  • $g(x) = 3x^2 + 4x + 5$
  • $h(x) = 2x^2 + 7$

Let's check whether $f(x)$ and $g(x)$ have a Big Oh or Little Oh relationship with respect to $h(x)$. To do this, we will apply the mathematical definitions. Let's start with $f(x)$. We are asking, what is the limit of the following?

$$\lim_{x\to\infty} \frac{2x+4}{2x^2+7} = \lim_{x\to\infty} \frac{\frac{2}{x}+\frac{4}{x^2}}{2+\frac{7}{x^2}} = 0$$

Since the limit as $x$ approaches infinity is zero, it satisfies the Little Oh definition. This means that $f = o(h)$.

On the other hand, when we apply this to $g(x)$, we will get the following.

$$\lim_{x\to\infty} \frac{3x^2+4x+5}{2x^2+7} = \lim_{x\to\infty} \frac{3+\frac{4}{x}+\frac{5}{x^2}}{2+\frac{7}{x^2}} = \frac{3}{2} < \infty$$

This satisfies the Big Oh definition, and so we can say that $g = O(h)$.
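If you want, you can check these limits symbolically, for example with sympy; this is just a verification sketch for the functions defined above.

```python
import sympy as sp

x = sp.symbols("x", positive=True)
f = 2 * x + 4
g = 3 * x**2 + 4 * x + 5
h = 2 * x**2 + 7

print(sp.limit(f / h, x, sp.oo))   # 0   -> f = o(h)
print(sp.limit(g / h, x, sp.oo))   # 3/2 -> g = O(h)
```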

We can see that, in asymptotic notation, Little Oh behaves like the relational operator $<$ in arithmetic, while Big Oh behaves like $\le$. You may think that $g$ is not less than $h$ since its leading constant is bigger than that of $h$. However, we still say that, as $x$ approaches infinity, $g$ is bounded by $h$ up to some constant: the ratio does not grow to infinity.

Big Omega​

The previous notation "Big Oh" is used to indicate an upper bound. The notation to indicate the lower bound is Big Omega, or simply Omega.

$$f = \Omega(g)$$

if and only if there exist a constant $c$ and an $x_0$ such that for all $x \ge x_0$, we have

$$f(x) \ge c |g(x)|$$

In other words, this simply means that $f(x)$ is greater than or equal to $g(x)$, up to a constant factor. As you can guess, this sounds like Big Oh in reverse.

$$f(x) = O(g(x)) \text{ if and only if } g(x) = \Omega(f(x))$$

For example,

$$x^2 = \Omega(x)$$

We will use the definition of Big Oh to prove this by exchanging the terms $f$ and $g$. We are going to prove that

$$x = O(x^2)$$

This is true because

$$\lim_{x\to\infty}\frac{x}{x^2} = \lim_{x\to\infty}\frac{1}{x} < \infty$$

Therefore,

$$x^2 = \Omega(x)$$

Little Omega​

This notation is used to denote that one function grows strictly faster than another function.

$$f(x) = \omega(g(x))$$

if and only if,

$$\lim_{x\to\infty} \frac{g(x)}{f(x)} = 0$$

This is like the reverse of Little Oh,

$$f(x) = \omega(g(x)) \text{ if and only if } g(x) = o(f(x))$$

For example,

$$x^{1.5} = \omega(x)$$

This is true because

$$\lim_{x\to\infty} \frac{x}{x^{1.5}} = \lim_{x\to\infty} x^{-0.5} = \lim_{x\to\infty} \frac{1}{\sqrt{x}} = 0$$

Theta​

Sometimes, we want to specify both the upper bound and the lower bound at the same time. We use this notation,

$$f = \Theta(g)$$

if and only if,

$$f = O(g) \text{ and } g = O(f)$$

For example,

$$10n^3 - 20n^2 + 1 = \Theta(n^3)$$

We then have to prove both conditions, i.e. $f = O(g)$ and $g = O(f)$. So we will start with the first one.

$$10n^3 - 20n^2 + 1 = O(n^3)$$

This is true because

$$\lim_{n\to\infty} \text{sup} \frac{10n^3 - 20n^2 + 1}{n^3} = 10 < \infty$$

Now, we prove the second condition.

$$n^3 = O(10n^3 - 20n^2 + 1)$$

This is also true because

$$\lim_{n\to\infty} \text{sup} \frac{n^3}{10n^3 - 20n^2 + 1} = \frac{1}{10} < \infty$$

Therefore,

$$10n^3 - 20n^2 + 1 = \Theta(n^3)$$

Analogies with Relation Operators​

These asymptotic notations can be understood better through analogies with the relational operators for numbers.

| relational operator | asymptotic notation |
| --- | --- |
| $f = g$ | $f = \Theta(g)$ |
| $f < g$ | $f = o(g)$ |
| $f \le g$ | $f = O(g)$ |
| $f > g$ | $f = \omega(g)$ |
| $f \ge g$ | $f = \Omega(g)$ |